Displaying 12 results from an estimated 12 matches for "3.1m".
2020 Feb 24
0
Problem with swap?
Hello,
today I ran "htop" to check my resources and could see that my
swap is nearly 100% full. This problem has existed since the server was
started, about 3 years ago. It's not a critical issue for me, because the
server is running fine. I have increased the size of the swap several times.
Today 9.3 GB of the 10 GB swap is allocated (33 days uptime). My system is
still running and I have no
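For reference, a rough way to see which processes actually hold the swap on a Linux box like this (VmSwap values live in /proc and are reported in kB; treat this as a sketch, not a diagnosis):
# overall swap usage
free -h
swapon --show
# per-process swap, largest consumers last
grep VmSwap /proc/[0-9]*/status | sort -k2 -n | tail -20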
2008 Jan 23
1
FreeBSD 6.3-Release + squid 2.6.17 = Hang process.
Hi:
We have a machine running 6.2-R-p10 and squid 2.6.17,
and upgraded it to 6.3-R yesterday,
but squid hangs and eats 100% CPU time about 1 hour after a restart;
the machine stays alive, but there is no response from squid.
Downgrading to 6.2-R-p10 makes everything OK again.
Here is some information:
machine type:
FreeBSD 6.3-RELEASE #0: Wed Jan 23 01:58:39 CST 2008
CPU: Intel(R) Xeon(TM) CPU 2.40GHz
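A rough way to tell whether the hung squid is spinning in userland or stuck in a syscall on a FreeBSD 6.x box (the PID below is a placeholder):
# check process state and wait channel
ps -axo pid,state,wchan,%cpu,command | grep [s]quid
# attach and watch which syscalls (if any) it is still making
truss -p <squid_pid>
# or: ktrace -p <squid_pid>, wait a while, ktrace -C, then kdump | less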
2008 Jul 24
2
You didn't give me some packages, so now I'm giving you some! R, TexLive, LyX, Gnumeric, etc.
I want up-to-dateish versions of TexLive, R, gnumeric and emacs, but on a
more-or-less stable base of CentOS 5.2. I asked for packages for this,
but got no answers. So now I've built them and will let you try them
if you want. I used the source packages from Fedora 8 and 9.
I wanted TexLive because many of us have jumped ship to Ubuntu Linux
8.04, which does offer TexLive, and the compatibility
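The general shape of rebuilding such a Fedora source package on a CentOS 5 box, for anyone who wants to reproduce it (the .src.rpm name is a placeholder, and any missing BuildRequires have to be installed or rebuilt first):
# rebuild a Fedora SRPM against the installed CentOS 5 toolchain
rpmbuild --rebuild R-x.y.z-1.fc9.src.rpm
# binary packages end up under the RPMS directory of your rpmbuild topdir
# (/usr/src/redhat/RPMS on a stock CentOS 5 rpm setup)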
2008 Aug 26
3
Dovecot - T-Bird - RETR command failed
I have recently installed new Untangle firewalls (untangle.com) in
bridge mode at two office locations. Both offices collect mail from a
Fedora 9 postfix/dovecot server at location #1. Since the install of
the Untangles I have been plagued by Thunderbird errors while trying to
POP email.
The error says the "RETR" command failed. If the user logs into the webmail
(Squirrelmail) on the
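One way to narrow down whether the Untangle bridge is mangling the POP3 session is to run RETR by hand from each office (hostname and credentials below are placeholders; use openssl s_client -connect host:995 instead of telnet if the clients use POP3S):
telnet mail.example.com 110
# at the POP3 prompt:
#   USER someuser
#   PASS secret
#   LIST
#   RETR 1
#   QUIT
# if RETR succeeds from the server's own LAN but fails through the bridge,
# the firewall's handling of POP traffic is the likely culprit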
2023 Jul 04
1
remove_me files building up
Thanks for the clarification.
That behaviour is quite weird, as arbiter bricks should hold only metadata.
What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking up space, that is a really strange situation. From which version
2016 Jul 12
1
Testing a forest trusts in Samba 4.4.5 AD environment
Database size would interest us here, with and without the trust, if you
have these metrics. The global catalog is supposed to store some attributes
of almost all objects of all trusted domains, if my understanding is correct,
and we have no real idea what that means in concrete terms.
2016-07-12 12:55 GMT+02:00 Alex Crow <acrow at integrafin.co.uk>:
> On 12/07/16 09:36, mathias dufresne
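If raw numbers help, the SAM database size on a DC can be measured directly before and after creating the trust; a minimal sketch, assuming the default private directory of a Samba AD DC:
# main SAM database plus its partition files
du -sh /var/lib/samba/private/sam.ldb /var/lib/samba/private/sam.ldb.d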
2007 Feb 01
2
Indexing Performance Question (was tpop3d vs dovecot)
Since posting the previous thread we've set up a new system (Opteron
2.0 GHz, 1 GB RAM, Fedora 6) for testing. I am hoping somebody very
familiar with the indexing portion of dovecot can shed some light on
this for me.
After much testing, I've come to one primary conclusion: dovecot is
possibly unnecessarily scanning or reading files within the
maildir directories. Take a mailbox
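One crude way to check that suspicion would be to trace a single imap process and watch which maildir files it opens (Linux, PID is a placeholder):
strace -f -tt -e trace=open,stat,getdents -p <imap_pid> 2>&1 | grep -i maildir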
2023 Jul 05
1
remove_me files building up
Hi Strahil,
This is the output from the commands:
root@uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G /data/glusterfs/gv1/brick1/brick/.glusterfs
24M /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K /data/glusterfs/gv1/brick1/brick/mytute
18M /data/glusterfs/gv1/brick1/brick/.shard
0
2013 Feb 25
1
lmtp problem with wrong index path
Hello,
we've been using dovecot for pop3 and imap for some time now and we're
in the middle of deploying lmtp as well; however, we've run into a
problem we can't solve.
Specifically, for some reason it seems that dovecot tries to write to the
wrong index file during some, but not all, lmtp deliveries.
If lmtp tries to deliver to a user user_a at domain, sometimes it'll try to
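If this is dovecot 2.x, one way to compare what lmtp should be resolving per user is doveadm's user lookup; a sketch, with user_a@domain standing in for a real address:
# show the userdb fields (home, mail location, uid/gid) dovecot resolves for this user
doveadm user user_a@domain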
2023 Jul 04
1
remove_me files building up
Hi Strahil,
We're using gluster to act as a share for an application to temporarily process and store files, before they're archived off overnight.
The issue we're seeing isn't the inodes running out, but the actual disk space on the arb server running low.
This is the df -h output for the bricks on the arb server:
/dev/sdd1 15G 12G 3.3G 79%
2019 Apr 30
6
Disk space and RAM requirements in docs
Hi,
Has anybody recently built LLVM in Debug mode /within/ the space
requirements from the Getting Started doc?
https://llvm.org/docs/GettingStarted.html#hardware
> An LLVM-only build will need about 1-3 GB of space. A full build of
> LLVM and Clang will need around 15-20 GB of disk space.
From my experience these numbers look drastically low. On FreeBSD my
recent builds consumed more than
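For comparison, a Debug build can be kept somewhat smaller with a configuration along these lines; a sketch using standard LLVM CMake options, not a recommendation from the thread:
cmake -G Ninja ../llvm \
  -DCMAKE_BUILD_TYPE=Debug \
  -DLLVM_ENABLE_PROJECTS="clang" \
  -DLLVM_TARGETS_TO_BUILD="X86" \
  -DBUILD_SHARED_LIBS=ON \
  -DLLVM_USE_SPLIT_DWARF=ON
ninja clang
# shared libraries and split DWARF cut the size of the Debug binaries and
# the intermediate link artifacts considerably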
2023 Jul 04
1
remove_me files building up
Hi Liam,
I saw that your XFS uses "imaxpct=25", which for an arbiter brick is a little bit low.
If you have free space on the bricks, increase the maxpct to a bigger value, like:
xfs_growfs -m 80 /path/to/brick
That will reserve 80% of the filesystem for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future.
Of course, always
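To put numbers on that advice, the before/after check could look like this (brick path taken from the earlier du output in this thread):
# current inode headroom and configured imaxpct
df -i /data/glusterfs/gv1/brick1/brick
xfs_info /data/glusterfs/gv1/brick1/brick | grep imaxpct
# raise the inode allocation ceiling to 80% of the filesystem
xfs_growfs -m 80 /data/glusterfs/gv1/brick1/brick
# verify
df -i /data/glusterfs/gv1/brick1/brick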