similar to: btrfs balance fails with no space errors (despite having plenty)

Displaying 19 results from an estimated 19 matches similar to: "btrfs balance fails with no space errors (despite having plenty)"

2015 Feb 09 (2 replies)
Getting mail quota exceeded with plenty of space
I have a user who is getting "mail quota exceeded": Feb 9 15:00:21 z9m9z dovecot: lda(dm at htt-consult.com): Error: sieve: msgid=<38308773.1704736628308773ywdm at htt-consult.com853430>: failed to store into mailbox 'INBOX': Quota exceeded (mailbox for user is full) Yet the quota is set to 1000 MB and the current reported use is 277 MB. There are only 28 messages in the inbox
2015 Feb 09 (0 replies)
Getting mail quota exceeded with plenty of space
Further checking shows another user also getting "Quota exceeded". This user has only 127 MB toward his quota. Only these two users have this problem, so far. Both are infrequent mail checkers. On 02/09/2015 03:14 PM, Robert Moskowitz wrote: > I have a user that is getting mail quota exceeded: > > > Feb 9 15:00:21 z9m9z dovecot: lda(dm at htt-consult.com): Error:
2015 Feb 09 (0 replies)
Getting mail quota exceeded with plenty of space
On 02/09/2015 03:37 PM, Bertrand Caplet wrote: >> Further checking shows another user also getting "Quota exceeded". This >> user has only 127 MB toward his quota. Only these two users have this >> problem, so far. Both are infrequent mail checkers. > It might be the quota on the number of messages: Could be. dm has over 9k trashed messages. but.. > Check
2015 Feb 09 (0 replies)
Getting mail quota exceeded with plenty of space
On 02/09/2015 03:37 PM, Bertrand Caplet wrote: >> Further checking shows another user also getting "Quota exceeded". This >> user has only 127 MB toward his quota. Only these two users have this >> problem, so far. Both are infrequent mail checkers. > It might be the quota on the number of messages: that was it. Emptied trash and mail flowing. How is the message
2015 Feb 09 (0 replies)
Getting mail quota exceeded with plenty of space
On 02/09/2015 04:04 PM, Bertrand Caplet wrote: >> that was it. Emptied trash and mail flowing. How is the message # >> quota managed? I never encountered it before. >> >> But don't have time today to dig into it. conference call coming up. > You might have messages quota configured somewhere. > And for: >> doveadm(root): Fatal: Unknown command
2019 Aug 23 (0 replies)
plenty of vacuuming processes
Yes. Please start by telling us the running OS and Samba version, with an output of smb.conf. And it looks like: https://bugzilla.samba.org/show_bug.cgi?id=13168 Increase the log levels and post those as well when you answer the above. Greetz, Louis > -----Original message----- > From: samba [mailto:samba-bounces at lists.samba.org] On behalf of > Benedikt Kale? via samba > Sent:
2015 Feb 09 (2 replies)
Getting mail quota exceeded with plenty of space
> that was it. Emptied trash and mail flowing. How is the message # > quota managed? I never encountered it before. > > But don't have time today to dig into it. conference call coming up. You might have messages quota configured somewhere. And for: > doveadm(root): Fatal: Unknown command 'quota', but plugin quota > exists. Try to set mail_plugins=quota See
2010 Dec 10 (2 replies)
Question about "slow" storage but fast CPUs, plenty of RAM and dovecot
Hello. We are using dovecot 1.2.x. In our setup we will have 1200 concurrent IMAP users (maildirs), and we have 2x RAID5 SAS 15k disks mounted via iSCSI. The dovecot server (RHEL 5 x64) is a virtual machine in our VMware ESX cluster. We want to minimize disk I/O; what config options should we use? We can "exchange" CPU & RAM to minimize disk I/O. Should we change to dovecot 2.0?
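For questions like this one, the usual dovecot 2.x knobs trade index freshness and fsync durability for fewer disk operations. The fragment below is an illustrative sketch of that direction, not advice from this thread; the INDEX path is a made-up example.

```
# Sketch: dovecot 2.x options commonly used to reduce maildir disk I/O.
# All values are illustrative starting points, not thread recommendations.
mail_fsync = optimized              # fewer fsync() calls than "always"
maildir_very_dirty_syncs = yes      # trust dovecot's index; stat() maildirs less often
mailbox_list_index = yes            # serve LIST/STATUS from an index (dovecot 2.1+)

# Keep indexes on fast local disk while mail stays on the iSCSI volume
# ("/var/dovecot/indexes" is a hypothetical path):
mail_location = maildir:~/Maildir:INDEX=/var/dovecot/indexes/%u
```

Moving only the indexes off the shared storage is the classic compromise here: index I/O is frequent and small, while message files are read comparatively rarely.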
2015 Feb 09 (4 replies)
Getting mail quota exceeded with plenty of space
> Further checking shows another user also getting "Quota exceeded". This > user has only 127 MB toward his quota. Only these two users have this > problem, so far. Both are infrequent mail checkers. It might be the quota on the number of messages: check with "doveadm quota get -u user at domain.example" whether there is a limit on the number of messages. Regards, --
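The fix the thread converges on (a message-count limit being hit while storage use was far below the cap) corresponds to a dovecot quota rule like the one below. This is a hedged 2.x-syntax sketch with illustrative numbers, not the poster's actual configuration.

```
# Sketch: dovecot 2.x quota with BOTH a storage and a message-count limit.
# Hitting either limit yields "Quota exceeded (mailbox for user is full)".
mail_plugins = $mail_plugins quota

plugin {
  quota = maildir:User quota
  quota_rule = *:storage=1000M:messages=10000   # illustrative limits
}
```

With the quota plugin loaded, `doveadm quota get -u user@domain.example` reports the message-count usage alongside storage usage, which is how a full Trash folder can trip the quota even at 277 MB of 1000 MB used.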
2006 May 15 (1 reply)
Memory allocation fails in R 2.2.1 and R 2.3.0 on SGI IRIX, while plenty of memory available (PR#8861)
Dear R developers, We have a big SGI Origin computation server with 32 CPUs and 64 GB of RAM. In R 2.0.0 we could run large jobs; allocating 8 GB of RAM was not a problem, for example by running: > v1 <- seq(1,2^29) > v2 <- seq(1,2^29) > v3 <- seq(1,2^29) > v4 <- seq(1,2^29) This yields an R process consuming about 8 GB of RAM: PID PGRP USERNAME
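As a sanity check on the reported numbers: the four vectors account for roughly 8 GB only if each element occupies 4 bytes. A quick back-of-the-envelope in Python, under the assumption that seq(1, 2^29) yields 4-byte R integer storage (doubles would need twice as much):

```python
# Back-of-the-envelope memory estimate for the four R vectors above.
# Assumption: each seq(1, 2^29) result is stored as 4-byte integers,
# consistent with the ~8 GB the poster reports.
elements_per_vector = 2 ** 29      # 536,870,912 elements
bytes_per_element = 4              # R integer
vectors = 4

total_bytes = elements_per_vector * bytes_per_element * vectors
total_gib = total_bytes / 2 ** 30  # 2 GiB per vector, 4 vectors
print(f"{total_gib:.0f} GiB")      # 8 GiB
```

Each vector is exactly 2 GiB of payload, so the observed ~8 GB process size leaves little headroom for allocator overhead before a 32-bit-limited or rlimit-capped allocation path fails.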
2008 Aug 05 (3 replies)
dovecot reporting "No space left on device" - yet df shows plenty of space / inodes.
Hi, I am running dovecot 1.0.rc7 on a SuSE Linux server. The server has approx. 200+ mailboxes. Last week the filesystem (/dev/mapper/datavg/dat2lv) ran out of space, causing it to go into read-only mode. When I realised this, I allocated some more space and rebooted the machine... Strangely, it seems that dovecot is still having problems... It's like dovecot doesn't realise that
2015 Feb 09 (2 replies)
Getting mail quota exceeded with plenty of space
> doveadm(root): Fatal: Unknown command 'quota', but plugin quota exists. > Try to set mail_plugins=quota Show me your doveconf -n output, without your passwords. -- CHUNKZ.NET - script kiddie and computer technician Bertrand Caplet, Flers (FR) Feel free to send encrypted/signed messages Key ID: FF395BD9 GPG FP: DE10 73FD 17EB 5544 A491 B385 1EDA 35DC FF39 5BD9
2019 Aug 23 (2 replies)
plenty of vacuuming processes
Hi, Oh sorry, of course: the running OS is Debian 9.9 and I'm running sernet-samba-ctdb version 4.9.11-15. This is my configuration: [global] winbind refresh tickets = Yes winbind use default domain = yes template shell = /bin/bash idmap config * : range = 1000000 - 1999999 idmap config ZFD : backend = rid idmap config ZFD : range = 0 - 200000 hide dot
2012 Aug 25 (9 replies)
[Bug 54056] New: plenty of errors trapped write at plus PGRAPH error
https://bugs.freedesktop.org/show_bug.cgi?id=54056
Bug #: 54056
Summary: plenty of errors trapped write at plus PGRAPH error
Classification: Unclassified
Product: xorg
Version: git
Platform: Other
OS/Version: All
Status: NEW
Severity: normal
Priority: medium
Component: Driver/nouveau
2019 Aug 23 (2 replies)
plenty of vacuuming processes
Hi, I have a ctdb cluster with 3 nodes and 3 glusterfs (version 6) nodes up and running. I observe plenty of these situations: a connected Windows 10 client doesn't react anymore. I use folder redirections. - Smbstatus shows up some (auth in progress) processes. - In the logs of a ctdb node I get: Aug 23 10:12:29 ctdb-1 ctdbd[2167]: Ending traverse on DB locking.tdb (id 568831), records
2023 May 04 (1 reply)
'error=No space left on device' but there is plenty of space on all nodes
Hi Strahil and Gluster users, Yes, I had checked, but I checked again: only 1% inode usage, 99% free, the same on every node. Example:
[root at nybaknode1 ]# df -i /lvbackups/brick
Filesystem                          Inodes IUsed      IFree IUse% Mounted on
/dev/mapper/vgbackups-lvbackups 3108921344 93602 3108827742    1% /lvbackups
[root at nybaknode1 ]#
I neglected to clarify in
2023 May 02 (1 reply)
'error=No space left on device' but there is plenty of space on all nodes
Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We have been using a 12-node glusterfs v10.4 distributed vsftpd backup cluster for years (it is not new), and 2 weeks ago we upgraded from v9 to v10.4. I do not know if the upgrade is related to this new issue. We are seeing a new 'error=No space left on device' error
2023 May 04 (1 reply)
'error=No space left on device' but there is plenty of space on all nodes
Hi, Have you checked inode usage (df -i /lvbackups/brick)? Best Regards, Strahil Nikolov On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote: Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We are using a 12-node glusterfs v10.4 distributed vsftpd backup cluster for years (not new) and recently, 2 weeks ago
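Beyond eyeballing df output, the block-vs-inode distinction behind this thread can be checked programmatically. The sketch below uses Python's standard-library os.statvfs; the check_space helper name is made up for illustration.

```python
import os

def check_space(path: str) -> dict:
    """Report block and inode usage for the filesystem holding `path`.

    ENOSPC can mean out of blocks *or* out of inodes; `df -h` only
    shows the former, `df -i` the latter. statvfs exposes both.
    """
    st = os.statvfs(path)
    return {
        "blocks_total": st.f_blocks,   # total data blocks
        "blocks_free": st.f_bavail,    # blocks available to non-root
        "inodes_total": st.f_files,    # total inodes
        "inodes_free": st.f_favail,    # inodes available to non-root
        "block_use_pct": 100 * (1 - st.f_bavail / st.f_blocks) if st.f_blocks else 0.0,
        "inode_use_pct": 100 * (1 - st.f_favail / st.f_files) if st.f_files else 0.0,
    }

print(check_space("/"))
```

Note that some filesystems (btrfs, for example) allocate inodes dynamically and report zero for the inode totals, which is why the sketch guards the percentage calculations.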
2008 Oct 10 (1 reply)
[PATCH] fix enospc when there is plenty of space
Hello, So there is an odd case where we can possibly return -ENOSPC when there is in fact space to be had. I think I finally have a handle on what the problem is; it only happens with metadata writes, and happens _very_ infrequently. What has to happen is that we have allocated out of the first logical byte on the disk, which would set last_alloc to first_logical_byte(root, 0), so