search for: diskusag

Displaying 8 results from an estimated 8 matches for "diskusag".

2010 Mar 04
1
[3.0.2] booster + unfsd failed
...tvolume_cbk] 192.168.1.128-1: Connected to 192.168.1.128:6996, attached to remote volume 'brick1'. [2010-03-04 13:48:16] N [client-protocol.c:6246:client_setvolume_cbk] 192.168.1.128-1: Connected to 192.168.1.128:6996, attached to remote volume 'brick1'. [2010-03-04 13:48:16] D [dht-diskusage.c:71:dht_du_info_cbk] distribute: on subvolume '192.168.1.127-1': avail_percent is: 96.00 and avail_space is: 66246918144 [2010-03-04 13:48:16] D [dht-diskusage.c:71:dht_du_info_cbk] distribute: on subvolume '192.168.1.128-1': avail_percent is: 97.00 and avail_space is: 67548491776...
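The dht-diskusage debug lines above follow a regular format, so they are easy to summarize per subvolume. Below is a minimal, illustrative Python sketch; the regular expression and the log path are assumptions based only on the messages quoted in these results, not on the GlusterFS sources:

import re

# Pattern assumed from the dht_du_info_cbk debug lines quoted in these results.
DU_LINE = re.compile(
    r"\[dht-diskusage\.c:\d+:dht_du_info_cbk\] (?P<xlator>\S+): "
    r"(?:on )?subvolume '(?P<subvol>[^']+)': "
    r"avail_percent is: (?P<pct>[\d.]+) and avail_space is: (?P<bytes>\d+)"
)

def disk_usage_report(log_path):
    """Return {subvolume: (avail_percent, avail_space_bytes)} parsed from a client log."""
    report = {}
    with open(log_path) as log:
        for line in log:
            m = DU_LINE.search(line)
            if m:
                report[m.group("subvol")] = (float(m.group("pct")), int(m.group("bytes")))
    return report

if __name__ == "__main__":
    # Placeholder path; point it at the glusterfs client log being inspected.
    for subvol, (pct, free) in sorted(disk_usage_report("client.log").items()):
        print(f"{subvol}: {pct:.2f}% available, {free / 2**30:.1f} GiB free")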
2007 Sep 17
3
change uid/gid below 100
Hi. This is only indirectly related to zfs. I need to test diskusage/performance on zfs shared via nfs. I have installed nevada b64a. Historically the uid/gid for user www has been 16/16, but when I try to add uid/gid www via smc with the value 16 I'm not allowed to do so. I'm coming from a FreeBSD background, where I alter the uid using vipw and edit /etc/...
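For anyone hitting the same restriction, a hypothetical first step (not from the original thread) before assigning the low uid/gid by hand with usermod/groupmod or by editing the passwd database directly is to confirm that the id is actually free. A minimal Python sketch, assuming a standard POSIX account database:

import pwd, grp

# Check whether uid/gid 16 is already taken before assigning it to 'www'
# outside the GUI tool, which refuses values below its minimum.
TARGET_ID = 16

uid_owner = next((p.pw_name for p in pwd.getpwall() if p.pw_uid == TARGET_ID), None)
gid_owner = next((g.gr_name for g in grp.getgrall() if g.gr_gid == TARGET_ID), None)

print(f"uid {TARGET_ID}: {'free' if uid_owner is None else 'used by ' + uid_owner}")
print(f"gid {TARGET_ID}: {'free' if gid_owner is None else 'used by ' + gid_owner}")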
2010 Mar 15
1
Glusterfs 3.0.X crashed on Fedora 11
...ached to remote volume 'brick8'. [2010-03-14 22:46:48] D [addr.c:190:gf_auth] brick3: allowed = "*", received addr = "192.168.1.155" [2010-03-14 22:46:48] N [server-protocol.c:5852:mop_setvolume] server: accepted client from 192.168.1.155:998 [2010-03-14 22:46:48] D [dht-diskusage.c:71:dht_du_info_cbk] dist: on subvolume 'rep1': avail_percent is: 99.00 and avail_space is: 952169959424 [2010-03-14 22:46:48] D [dht-diskusage.c:71:dht_du_info_cbk] dist: on subvolume 'rep2': avail_percent is: 99.00 and avail_space is: 984271716352 [2010-03-14 22:46:48] D [dht-di...
2007 Oct 14
6
accurate file size
Hello. I was copying some files from one server to another when I realized the total file size (sum of all files) on one server is a bit more than on the one it was copied from (about 6 when I do du -s). Individual file sizes are identical when I do a one-by-one file comparison, but the sum is different. Is there a more accurate way to make sure of the integrity of the files? (other than pgp or
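A likely explanation, and a more precise check than comparing totals: du -s reports allocated blocks, so sparse files, hard links, and different filesystem block sizes can change the sum even when every file is byte-identical; comparing per-file checksums avoids that. A minimal sketch (the /data/src and /data/copy paths are placeholders, not from the original post):

import hashlib, os

def tree_checksums(root):
    """Map relative file path -> SHA-256 digest for every regular file under root."""
    sums = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            sums[os.path.relpath(path, root)] = h.hexdigest()
    return sums

# Compare the source tree with the copy; anything listed differs or is missing.
src, dst = tree_checksums("/data/src"), tree_checksums("/data/copy")
for rel in sorted(src.keys() | dst.keys()):
    if src.get(rel) != dst.get(rel):
        print("MISMATCH:", rel)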
2009 Apr 15
2
Quota for Shared Folders
Good morning list, first of all: dovecot works really great, the performance is overwhelming (especially compared to courier), the configuration is flexible as hell, and it is well documented - I love this software. But as things get complicated, I think I need some additional help. I'm using dovecot to replace the currently used courier mailserver in a shared hosting environment based on
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
...returned -1 error: No such file or directory [No such file or directory] [2017-09-20 13:34:23.352424] D [fuse-resolve.c:61:fuse_resolve_entry_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001/tempdir3: failed to resolve (No such file or directory) [2017-09-20 13:34:23.352749] D [MSGID: 0] [dht-diskusage.c:96:dht_du_info_cbk] 0-gv0-dht: subvolume 'gv0-replicate-0': avail_percent is: 99.00 and avail_space is: 21425758208 and avail_inodes is: 99.00 [2017-09-20 13:34:23.353086] D [MSGID: 0] [afr-transaction.c:1934:afr_post_nonblocking_entrylk_cbk] 0-gv0-replicate-0: Non blocking entrylks...
2017 Sep 20
0
"Input/output error" on mkdir for PPC64 based client
Looks like it is an issue with architecture compatibility in the RPC layer (i.e., with XDRs and how they are used). Just glance at the logs of the client process where you saw the errors; they could give some hints. If you don't understand the logs, share them, so we will try to look into it. -Amar On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan <WDeignan at uline.com> wrote: > I recently
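For context on the XDR remark: XDR (RFC 4506) serializes integers in big-endian byte order, so properly encoded messages are identical regardless of architecture; mixed PPC64/x86 problems of this kind usually point to some field being read or written in host byte order instead. A small illustration using plain Python struct (not GlusterFS code):

import struct

value = 0x0000000100000000  # 4294967296

# XDR always uses big-endian ("network") byte order, so both architectures
# produce the same bytes when the value is properly encoded:
xdr_bytes = struct.pack(">Q", value)

# If a field were written in host byte order instead, a PPC64 (big-endian)
# and an x86-64 (little-endian) host would emit different bytes for it:
big_endian_host = struct.pack(">Q", value)     # what a PPC64 host would write
little_endian_host = struct.pack("<Q", value)  # what an x86-64 host would write

print(xdr_bytes.hex())           # 0000000100000000
print(little_endian_host.hex())  # 0000000001000000
misread = struct.unpack(">Q", little_endian_host)[0]
print(misread)                   # 16777216 -- not the value that was sent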
2017 Sep 19
3
"Input/output error" on mkdir for PPC64 based client
I recently compiled the 3.10-5 client from source on a few PPC64 systems running RHEL 7.3. They are mounting a Gluster volume which is hosted on more traditional x86 servers. Everything seems to be working properly except for creating new directories from the PPC64 clients. The mkdir command gives an "Input/output error" and for the first few minutes the new directory is