Displaying 7 results from an estimated 7 matches for "f46b".
2008 Jan 15
1
cisco ip phone 7911G with asterisk
...uld ask SEP<mac>.xml.cnf. I don't know if I'm doing something wrong or if it could be an issue with the firmware version.
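A quick way to confirm which filename the phone is actually requesting is to
watch the TFTP traffic on the Asterisk/TFTP server; the interface, server
address and MAC below are illustrative placeholders, not values from this
thread:
# tcpdump -n -i eth0 udp port 69
Then try fetching the file by hand with the tftp-hpa client, using whatever
name the capture shows:
# tftp 192.168.1.10 -c get SEP001122334455.xml.cnf
If I remember the naming right, these SIP loads normally request
SEP<MAC>.cnf.xml, so comparing the exact captured name against what sits in
the TFTP root is a good first check.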
I would appreciate any clue. Thanks,
Christian Pinedo Zamalloa (zako)
PGP key at: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x828D0C80
Fingerprint: 7BFF 4105 F46B 7977 BD96 348C 1007 4FF8 828D 0C80
______________________________________________
Web Revelación Yahoo! 2007:
Audience Favorite Award.
http://es.promotions.yahoo.com/revelacion2007/favoritos/
2017 Nov 14
0
file changed as we read it
...rver-3.12.1-2.el7.x86_64
# gluster volume get home cluster.consistent-metadata
Option                         Value
------                         -----
cluster.consistent-metadata    on
# gluster volume info home
Volume Name: home
Type: Replicate
Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: sphere-six:/srv/gluster_home/brick
Brick2: sphere-five:/srv/gluster_home/brick
Brick3: sphere-four:/srv/gluster_home/brick
Options Reconfigured:
features.quota-deem-statfs: on
fe...
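A minimal way to toggle the option shown above and re-test; the mount path
and directory are placeholders, and the archive goes to a real file rather
than /dev/null because GNU tar skips reading file data when the archive is
/dev/null:
# gluster volume set home cluster.consistent-metadata on
# tar -cf /tmp/home-test.tar -C /path/to/fuse/mount some-directory
The "file changed as we read it" warning should stop appearing if
inconsistent stat results from the replicas were the cause.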
2017 Nov 19
0
gluster share as home
...ate any feedback I can get about gluster
volumes as small-file storage: optimizations, potential problems.
Thanks a lot!
Cheers
Richard
PS: our current gluster home setup, all clients are using the fuse client
# gluster volume info home
Volume Name: home
Type: Replicate
Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: sphere-six:/srv/gluster_home/brick
Brick2: sphere-five:/srv/gluster_home/brick
Brick3: sphere-four:/srv/gluster_home/brick
Options Reconfigured:
features.quota-deem-statfs: on
fe...
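Not an authoritative tuning list, but the options that usually come up for
small-file / home-directory workloads on replicated volumes look like the
sketch below; the option names should be checked against the documentation
for the release actually in use (3.12 here), and each change tested, since
some of them interact:
# gluster volume set home cluster.lookup-optimize on
# gluster volume set home performance.parallel-readdir on
# gluster volume set home features.cache-invalidation on
# gluster volume set home performance.md-cache-timeout 600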
2017 Oct 26
0
not healing one file
...taking a look at this. I haven't been working with gluster long
> enough to make heads or tails of the logs. The logs are attached to
> this mail and here is the other information:
>
> # gluster volume info home
>
> Volume Name: home
> Type: Replicate
> Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
> Status: Started
> Snapshot Count: 1
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: sphere-six:/srv/gluster_home/brick
> Brick2: sphere-five:/srv/gluster_home/brick
> Brick3: sphere-four:/srv/gluster_home/brick
> Options Re...
2017 Oct 26
2
not healing one file
Hi Karthik,
thanks for taking a look at this. I haven't been working with gluster long
enough to make heads or tails of the logs. The logs are attached to
this mail and here is the other information:
# gluster volume info home
Volume Name: home
Type: Replicate
Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: sphere-six:/srv/gluster_home/brick
Brick2: sphere-five:/srv/gluster_home/brick
Brick3: sphere-four:/srv/gluster_home/brick
Options Reconfigured:
features.barrier: disable
cluster...
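For the record, these are the commands usually used to see what the
self-heal daemon still considers pending on this volume, and whether
anything is flagged as split-brain (the volume name is taken from the
output above):
# gluster volume heal home info
# gluster volume heal home info split-brain
An index heal can also be kicked off by hand with:
# gluster volume heal home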
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks (example invocation below)
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
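For item 2, a concrete invocation against the brick paths from this thread
would look like the following on each of the three nodes; the part after the
brick root is a placeholder for the file that is not healing:
# getfattr -d -e hex -m . /srv/gluster_home/brick/<path/to/unhealed/file>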
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try the recently released health
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
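If I remember right, the tool is distributed as a standalone
gluster-health-report command (e.g. installable with pip); treat the exact
invocation below as an assumption and check the project's README:
# pip install gluster-health-report
# gluster-health-report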
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague; we will check this and respond next