search for: 4f07

Displaying 10 results from an estimated 10 matches for "4f07".

2004 Aug 18 · 1 · paging/intercom
...nversion this evening. I can't seem to get paging to work. I have the chan_oss module loaded as per the wiki, and I have the following in my dial plan ;here is our intercom exten => 6000,1,Dial,console/dsp when I dial it here is the output from the console -- Executing Dial("SIP/3062-4f07", "console/dsp") in new stack << Call placed to 'dsp' on console >> << Auto-answered >> -- Called dsp -- OSS/dsp answered SIP/3062-4f07 << Hangup on console >> == Spawn extension (from-sip, 6000, 1) exited non-zero on 'S...
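For context, the dialplan line quoted in this message uses the old comma-separated application syntax (`Dial,console/dsp`). A minimal extensions.conf sketch of the same console-paging setup, rewritten in the parenthesized syntax and assuming chan_oss is loaded and exposes the sound card as `console/dsp` (as the quoted log shows), might look like:

```ini
[from-sip]
; hypothetical paging extension: send the call to the local
; sound card via chan_oss, which auto-answers on the console
exten => 6000,1,Dial(console/dsp)
```

The context name `from-sip` and extension `6000` are taken from the quoted log; everything else is an illustrative assumption, not the poster's full dialplan.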
2017 Jul 07 · 2 · I/O error for one folder within the mountpoint
...d27a71d2-6d53-413d-b88c-33edea202cc2> <gfid:7e7f02b2-3f2d-41ff-9cad-cd3b5a1e506a> Status: Connected Number of entries: 6 Brick ipvr8.xxx:/mnt/gluster-applicatif/brick <gfid:47ddf66f-a5e9-4490-8cd7-88e8b812cdbd> <gfid:8057d06e-5323-47ff-8168-d983c4a82475> <gfid:5b2ea4e4-ce84-4f07-bd66-5a0e17edb2b0> <gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e> <gfid:8261c22c-e85a-4d0e-b057-196b744f3558> <gfid:842b30c1-6016-45bd-9685-6be76911bd98> <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38> <gfid:9d041c80-b7e4-4012-a097-3db5b09fe471> <gfid:ff48a14a-c1d5-45c6...
2017 Jul 07 · 0 · I/O error for one folder within the mountpoint
...; > <gfid:7e7f02b2-3f2d-41ff-9cad-cd3b5a1e506a> > Status: Connected > Number of entries: 6 > > Brick ipvr8.xxx:/mnt/gluster-applicatif/brick > <gfid:47ddf66f-a5e9-4490-8cd7-88e8b812cdbd> > <gfid:8057d06e-5323-47ff-8168-d983c4a82475> > <gfid:5b2ea4e4-ce84-4f07-bd66-5a0e17edb2b0> > <gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e> > <gfid:8261c22c-e85a-4d0e-b057-196b744f3558> > <gfid:842b30c1-6016-45bd-9685-6be76911bd98> > <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38> > <gfid:9d041c80-b7e4-4012-a097-3db5b09fe471> &g...
2017 Jul 07 · 2 · I/O error for one folder within the mountpoint
...d-41ff-9cad-cd3b5a1e506a> >> Status: Connected >> Number of entries: 6 >> >> Brick ipvr8.xxx:/mnt/gluster-applicatif/brick >> <gfid:47ddf66f-a5e9-4490-8cd7-88e8b812cdbd> >> <gfid:8057d06e-5323-47ff-8168-d983c4a82475> >> <gfid:5b2ea4e4-ce84-4f07-bd66-5a0e17edb2b0> >> <gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e> >> <gfid:8261c22c-e85a-4d0e-b057-196b744f3558> >> <gfid:842b30c1-6016-45bd-9685-6be76911bd98> >> <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38> >> <gfid:9d041c80-b7e4-4012-a097...
2017 Jul 07 · 0 · I/O error for one folder within the mountpoint
...>>> Status: Connected >>> Number of entries: 6 >>> >>> Brick ipvr8.xxx:/mnt/gluster-applicatif/brick >>> <gfid:47ddf66f-a5e9-4490-8cd7-88e8b812cdbd> >>> <gfid:8057d06e-5323-47ff-8168-d983c4a82475> >>> <gfid:5b2ea4e4-ce84-4f07-bd66-5a0e17edb2b0> >>> <gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e> >>> <gfid:8261c22c-e85a-4d0e-b057-196b744f3558> >>> <gfid:842b30c1-6016-45bd-9685-6be76911bd98> >>> <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38> >>> <gfid:9d0...
2017 Jul 07 · 0 · I/O error for one folder within the mountpoint
On 07/07/2017 01:23 PM, Florian Leleu wrote: > > Hello everyone, > > first time on the ML so excuse me if I'm not following well the rules, > I'll improve if I get comments. > > We got one volume "applicatif" on three nodes (2 and 1 arbiter), each > following command was made on node ipvr8.xxx: > > # gluster volume info applicatif > > Volume
2017 Jul 07 · 2 · I/O error for one folder within the mountpoint
Hello everyone, first time on the ML so excuse me if I'm not following well the rules, I'll improve if I get comments. We got one volume "applicatif" on three nodes (2 and 1 arbiter), each following command was made on node ipvr8.xxx: # gluster volume info applicatif Volume Name: applicatif Type: Replicate Volume ID: ac222863-9210-4354-9636-2c822b332504 Status: Started
2017 Oct 26 · 0 · not healing one file
Hey Richard, Could you share the following informations please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
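The three diagnostics requested in this message can be collected in one pass. A hedged command sketch follows; the volume name `home`, the brick path, the file path, and the log file names are placeholders assumed for illustration and must be substituted for the actual setup:

```
# 1. volume layout and options
gluster volume info home

# 2. AFR extended attributes of the affected file,
#    run against the brick path on every brick host
getfattr -d -e hex -m . /mnt/brick/home/path/to/file

# 3. recent self-heal daemon and heal-info log entries
tail -n 100 /var/log/glusterfs/glustershd.log
tail -n 100 /var/log/glusterfs/glfsheal-home.log
```

Step 2 must be run on each brick separately (not on the FUSE mount), since comparing the per-brick `trusted.afr.*` attributes is what reveals which copy is marked as needing heal.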
2017 Oct 26 · 3 · not healing one file
On a side note, try recently released health report tool, and see if it does diagnose any issues in setup. Currently you may have to run it in all the three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
2017 Oct 26 · 2 · not healing one file
...eal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 099093af-91a7-44a8-9300-31dddbe6b213. sources=0 [2] sinks=1 [2017-10-25 10:40:18.694584] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 5f12af51-9038-4f07-b6e3-c6b296e57e06. sources=0 [2] sinks=1 [2017-10-25 10:40:18.703985] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on a60b9c75-210a-4ff3-b5ea-0ad59e24293c. sources=0 [2] sinks=1 [2017-10-25 10:40:18.713368] I [MSGID: 108026] [afr-s...