Displaying 10 results from an estimated 10 matches for "brick1b".

2017 Jun 29 | 2 | afr-self-heald.c:479:afr_shd_index_sweep
...>
> [2017-06-29 07:03:50.074388] I [MSGID: 100030]
> [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfsd: Started running
> /usr/sbin/glusterfsd version 3.8.12 (args: /usr/sbin/glusterfsd -s
> virtnode-0-1-gluster --volfile-id
> iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
> -p
> /var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-gluster-data-glusterfs-brick1b-iso-images-repo.pid
> -S /var/run/gluster/c779852c21e2a91eaabbdda3b9127262.socket
> --brick-name /data/glusterfs/brick1b/iso-images-repo -l
> /var/log/glusterfs/bricks/d...

2017 Jun 29 | 0 | afr-self-heald.c:479:afr_shd_index_sweep
...e, after that into the log I see:
[2017-06-29 07:03:50.074388] I [MSGID: 100030] [glusterfsd.c:2454:main]
0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version
3.8.12 (args: /usr/sbin/glusterfsd -s virtnode-0-1-gluster --volfile-id
iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
-p
/var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-gluster-data-glusterfs-brick1b-iso-images-repo.pid
-S /var/run/gluster/c779852c21e2a91eaabbdda3b9127262.socket --brick-name
/data/glusterfs/brick1b/iso-images-repo -l
/var/log/glusterfs/bricks/data-glusterfs-brick1b-iso-imag...

2017 Jun 29 | 2 | afr-self-heald.c:479:afr_shd_index_sweep
...:50.074388] I [MSGID: 100030]
>> [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfsd: Started running
>> /usr/sbin/glusterfsd version 3.8.12 (args: /usr/sbin/glusterfsd
>> -s virtnode-0-1-gluster --volfile-id
>> iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
>> -p
>> /var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-gluster-data-glusterfs-brick1b-iso-images-repo.pid
>> -S /var/run/gluster/c779852c21e2a91eaabbdda3b9127262.socket
>> --brick-name /data/glusterfs/brick1b/iso-images-repo -l
>>...

2017 Jun 29 | 0 | afr-self-heald.c:479:afr_shd_index_sweep
...og I see:
>
> [2017-06-29 07:03:50.074388] I [MSGID: 100030] [glusterfsd.c:2454:main]
> 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.8.12
> (args: /usr/sbin/glusterfsd -s virtnode-0-1-gluster --volfile-id
> iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
> -p /var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-
> gluster-data-glusterfs-brick1b-iso-images-repo.pid -S /var/run/gluster/
> c779852c21e2a91eaabbdda3b9127262.socket --brick-name
> /data/glusterfs/brick1b/iso-images-repo -l /var/log/glusterfs/bricks/
> dat...

2017 Jun 28 | 2 | afr-self-heald.c:479:afr_shd_index_sweep
...ot present on all the
>> bricks and on all the servers:
>>
>> /data/glusterfs/brick1a/hosted-engine/.glusterfs/indices/:
>> total 0
>> drw------- 2 root root 55 Jun 28 15:02 dirty
>> drw------- 2 root root 57 Jun 28 15:02 xattrop
>>
>> /data/glusterfs/brick1b/iso-images-repo/.glusterfs/indices/:
>> total 0
>> drw------- 2 root root 55 May 29 14:04 dirty
>> drw------- 2 root root 57 May 29 14:04 xattrop
>>
>> /data/glusterfs/brick2/vm-images-repo/.glusterfs/indices/:
>> total 0
>> drw------- 2 root root 112 Jun 2...

2017 Jun 29 | 0 | afr-self-heald.c:479:afr_shd_index_sweep
...gt;> [2017-06-29 07:03:50.074388] I [MSGID: 100030] [glusterfsd.c:2454:main]
>> 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.8.12
>> (args: /usr/sbin/glusterfsd -s virtnode-0-1-gluster --volfile-id
>> iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
>> -p /var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-glus
>> ter-data-glusterfs-brick1b-iso-images-repo.pid -S
>> /var/run/gluster/c779852c21e2a91eaabbdda3b9127262.socket --brick-name
>> /data/glusterfs/brick1b/iso-images-repo -l /var/log/glusterfs/b...

2017 Jun 29 | 1 | afr-self-heald.c:479:afr_shd_index_sweep
...t;>> [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfsd: Started
>>> running /usr/sbin/glusterfsd version 3.8.12 (args:
>>> /usr/sbin/glusterfsd -s virtnode-0-1-gluster --volfile-id
>>> iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo
>>> -p
>>> /var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-gluster-data-glusterfs-brick1b-iso-images-repo.pid
>>> -S /var/run/gluster/c779852c21e2a91eaabbdda3b9127262.socket
>>> --brick-name /data/glusterfs/...

2017 Jun 28 | 3 | afr-self-heald.c:479:afr_shd_index_sweep
...it with mkdir.
In my case the 'entry-changes' directory is not present on all the
bricks and on all the servers:
/data/glusterfs/brick1a/hosted-engine/.glusterfs/indices/:
total 0
drw------- 2 root root 55 Jun 28 15:02 dirty
drw------- 2 root root 57 Jun 28 15:02 xattrop
/data/glusterfs/brick1b/iso-images-repo/.glusterfs/indices/:
total 0
drw------- 2 root root 55 May 29 14:04 dirty
drw------- 2 root root 57 May 29 14:04 xattrop
/data/glusterfs/brick2/vm-images-repo/.glusterfs/indices/:
total 0
drw------- 2 root root 112 Jun 28 15:02 dirty
drw------- 2 root root 66 Jun 28 15:02 xattrop...
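
The excerpt above suggests that the missing 'entry-changes' index directory can simply be created with mkdir on the bricks where it is absent. Below is a minimal sketch of that check-and-create step; the brick paths are the ones shown in the listing, and copying the mode of the sibling 'xattrop' directory is my assumption, not something the thread confirms.

    #!/usr/bin/env python3
    # Sketch: create a missing 'entry-changes' index directory on each brick,
    # mirroring the mode of the sibling 'xattrop' directory (drw------- above).
    # Brick paths are taken from the listing; adjust for your own volume layout.
    import os
    import stat

    BRICKS = [
        "/data/glusterfs/brick1a/hosted-engine",
        "/data/glusterfs/brick1b/iso-images-repo",
        "/data/glusterfs/brick2/vm-images-repo",
    ]

    for brick in BRICKS:
        indices = os.path.join(brick, ".glusterfs", "indices")
        target = os.path.join(indices, "entry-changes")
        if os.path.isdir(target):
            print(f"{target}: already present")
            continue
        sibling = os.path.join(indices, "xattrop")
        mode = stat.S_IMODE(os.stat(sibling).st_mode) if os.path.isdir(sibling) else 0o600
        os.mkdir(target, mode)
        print(f"{target}: created with mode {oct(mode)}")

Since the listing shows the directory missing on all the bricks and on all the servers, the check would need to run on each server.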

2012 Feb 07 | 1 | Recommendations for busy static web server replacement
Hi all,
after being a silent reader for some time and not having much success in getting
good performance out of our test set-up, I'm finally coming to the list with
some questions.
Right now, we are operating a web server serving out 4MB files for a
distributed computing project. Data is requested from all over the world at a
rate of about 650k to 800k downloads a day. Each data file is usually
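
For a rough sense of the traffic those numbers imply, here is a back-of-the-envelope estimate, assuming exactly 4 MB per download (the sentence describing the actual file sizes is cut off above):

    # Rough sustained-bandwidth estimate for the workload described above:
    # 4 MB per file, 650k to 800k downloads per day.
    FILE_MB = 4
    SECONDS_PER_DAY = 24 * 60 * 60

    for downloads_per_day in (650_000, 800_000):
        mb_per_day = downloads_per_day * FILE_MB
        mbit_per_s = mb_per_day * 8 / SECONDS_PER_DAY  # average megabits per second
        print(f"{downloads_per_day:,} downloads/day = {mb_per_day / 1e6:.1f} TB/day, "
              f"about {mbit_per_s:.0f} Mbit/s sustained")

That works out to roughly 2.6 to 3.2 TB per day, or an average of about 240 to 300 Mbit/s before any peak-hour skew.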

2017 Jun 28 | 0 | afr-self-heald.c:479:afr_shd_index_sweep
...ntry-changes' directory is not present on all the
> bricks and on all the servers:
>
> /data/glusterfs/brick1a/hosted-engine/.glusterfs/indices/:
> total 0
> drw------- 2 root root 55 Jun 28 15:02 dirty
> drw------- 2 root root 57 Jun 28 15:02 xattrop
>
> /data/glusterfs/brick1b/iso-images-repo/.glusterfs/indices/:
> total 0
> drw------- 2 root root 55 May 29 14:04 dirty
> drw------- 2 root root 57 May 29 14:04 xattrop
>
> /data/glusterfs/brick2/vm-images-repo/.glusterfs/indices/:
> total 0
> drw------- 2 root root 112 Jun 28 15:02 dirty
> drw------...