Search results for "475e"

Displaying 12 results from an estimated 12 matches for "475e".

2013 Jun 19 (1 reply): Fedora 18 dom0, no video?
...insmod ext2 set root='hd0,msdos2' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos2 --hint-efi=hd0,msdos2 --hint-baremetal=ahci0,msdos2 --hint='hd0,msdos2' 5e9b3ecc-1013-475e-9115-8869373c5f99 else search --no-floppy --fs-uuid --set=root 5e9b3ecc-1013-475e-9115-8869373c5f99 fi echo 'Loading Xen xen ...' multiboot /xen.gz placeholder echo 'Loading Linux 3.9.5-201.fc18.x86_64 ...'...
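Reassembled for readability, the snippet above is the stock grub.cfg pattern for locating the boot filesystem by UUID (with BIOS/EFI hints when available) before chain-loading Xen. The device names, UUID, and paths below all come from the snippet itself; the layout is just a sketch of how that one-liner unfolds:

```sh
insmod ext2
set root='hd0,msdos2'
if [ x$feature_platform_search_hint = xy ]; then
  # Fast path: hint GRUB where to look before falling back to a full scan.
  search --no-floppy --fs-uuid --set=root \
    --hint-bios=hd0,msdos2 --hint-efi=hd0,msdos2 \
    --hint-baremetal=ahci0,msdos2 --hint='hd0,msdos2' \
    5e9b3ecc-1013-475e-9115-8869373c5f99
else
  # Slow path: scan every filesystem for the UUID.
  search --no-floppy --fs-uuid --set=root 5e9b3ecc-1013-475e-9115-8869373c5f99
fi
echo 'Loading Xen xen ...'
multiboot /xen.gz placeholder
```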
2008 Aug 26 (0 replies): Problem with Roaming Profiles
...|gustavo|192.168.5.38|gustavom|profiles|unlink|ok|gustavo/Contacts/gustavo2078@yahoo.com.br/70CE5673-4D66-435A-81FA-52B24520B7B7.WindowsLiveContact Aug 25 08:33:32 localhost smbd_audit: 30829|gustavo|192.168.5.38|gustavom|profiles|unlink|ok|gustavo/Contacts/gustavo2078@yahoo.com.br/BD86C4BF-6D9B-475E-8C12-D1EAF5B0ACDF.WindowsLiveContact . . . . And then the whole profile was unlinked, and a new one was created on the server O.o My profile share in smb.conf: [profiles] path = /home/profiles read only = No create mask = 0600 directory mask = 0700 browse...
2017 Jul 07 (2 replies): I/O error for one folder within the mountpoint
...4da3-8abe-819670c70906> <gfid:15b7cdb3-aae8-4ca5-b28c-e87a3e599c9b> <gfid:1d087e51-fb40-4606-8bb5-58936fb11a4c> <gfid:bb0352b9-4a5e-4075-9179-05c3a5766cf4> <gfid:40133fcf-a1fb-4d60-b169-e2355b66fb53> <gfid:00f75963-1b4a-4d75-9558-36b7d85bd30b> <gfid:2c0babdf-c828-475e-b2f5-0f44441fffdc> <gfid:bbeff672-43ef-48c9-a3a2-96264aa46152> <gfid:6c0969dd-bd30-4ba0-a7e5-ba4b3a972b9f> <gfid:4c81ea14-56f4-4b30-8fff-c088fe4b3dff> <gfid:1072cda3-53c9-4b95-992d-f102f6f87209> <gfid:2e8f9f29-78f9-4402-bc0c-e63af8cf77d6> <gfid:eeaa2765-44f4-4891...
2017 Jul 07 (0 replies): I/O error for one folder within the mountpoint
...; <gfid:15b7cdb3-aae8-4ca5-b28c-e87a3e599c9b> > <gfid:1d087e51-fb40-4606-8bb5-58936fb11a4c> > <gfid:bb0352b9-4a5e-4075-9179-05c3a5766cf4> > <gfid:40133fcf-a1fb-4d60-b169-e2355b66fb53> > <gfid:00f75963-1b4a-4d75-9558-36b7d85bd30b> > <gfid:2c0babdf-c828-475e-b2f5-0f44441fffdc> > <gfid:bbeff672-43ef-48c9-a3a2-96264aa46152> > <gfid:6c0969dd-bd30-4ba0-a7e5-ba4b3a972b9f> > <gfid:4c81ea14-56f4-4b30-8fff-c088fe4b3dff> > <gfid:1072cda3-53c9-4b95-992d-f102f6f87209> > <gfid:2e8f9f29-78f9-4402-bc0c-e63af8cf77d6> &g...
2017 Jul 07 (2 replies): I/O error for one folder within the mountpoint
...aae8-4ca5-b28c-e87a3e599c9b> >> <gfid:1d087e51-fb40-4606-8bb5-58936fb11a4c> >> <gfid:bb0352b9-4a5e-4075-9179-05c3a5766cf4> >> <gfid:40133fcf-a1fb-4d60-b169-e2355b66fb53> >> <gfid:00f75963-1b4a-4d75-9558-36b7d85bd30b> >> <gfid:2c0babdf-c828-475e-b2f5-0f44441fffdc> >> <gfid:bbeff672-43ef-48c9-a3a2-96264aa46152> >> <gfid:6c0969dd-bd30-4ba0-a7e5-ba4b3a972b9f> >> <gfid:4c81ea14-56f4-4b30-8fff-c088fe4b3dff> >> <gfid:1072cda3-53c9-4b95-992d-f102f6f87209> >> <gfid:2e8f9f29-78f9-4402-bc0c...
2017 Jul 07 (0 replies): I/O error for one folder within the mountpoint
...e599c9b> >>> <gfid:1d087e51-fb40-4606-8bb5-58936fb11a4c> >>> <gfid:bb0352b9-4a5e-4075-9179-05c3a5766cf4> >>> <gfid:40133fcf-a1fb-4d60-b169-e2355b66fb53> >>> <gfid:00f75963-1b4a-4d75-9558-36b7d85bd30b> >>> <gfid:2c0babdf-c828-475e-b2f5-0f44441fffdc> >>> <gfid:bbeff672-43ef-48c9-a3a2-96264aa46152> >>> <gfid:6c0969dd-bd30-4ba0-a7e5-ba4b3a972b9f> >>> <gfid:4c81ea14-56f4-4b30-8fff-c088fe4b3dff> >>> <gfid:1072cda3-53c9-4b95-992d-f102f6f87209> >>> <gfid:2e8...
2017 Jul 07 (0 replies): I/O error for one folder within the mountpoint
On 07/07/2017 01:23 PM, Florian Leleu wrote: > > Hello everyone, > > first time on the ML so excuse me if I'm not following well the rules, > I'll improve if I get comments. > > We got one volume "applicatif" on three nodes (2 and 1 arbiter), each > following command was made on node ipvr8.xxx: > > # gluster volume info applicatif > > Volume
2017 Jul 07 (2 replies): I/O error for one folder within the mountpoint
Hello everyone, first time on the ML so excuse me if I'm not following well the rules, I'll improve if I get comments. We got one volume "applicatif" on three nodes (2 and 1 arbiter), each following command was made on node ipvr8.xxx: # gluster volume info applicatif Volume Name: applicatif Type: Replicate Volume ID: ac222863-9210-4354-9636-2c822b332504 Status: Started
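For context, a replicate volume with two data nodes and one arbiter, as described above, is created with the `replica 3 arbiter 1` form of `gluster volume create`. A sketch only; the hostnames and brick paths below are placeholders (only ipvr8.xxx appears in the thread):

```sh
gluster volume create applicatif replica 3 arbiter 1 \
  ipvr7.xxx:/data/brick/applicatif \
  ipvr8.xxx:/data/brick/applicatif \
  ipvr9.xxx:/data/brick/applicatif
gluster volume start applicatif
```

The third brick (the arbiter) stores only file metadata, not data, which is why the volume shows as Replicate with "2 and 1 arbiter".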
2005 Dec 21 (9 replies): question about changejournal
Hi, I've got a newbie question--sorry if this is covered elsewhere, I parsed through the archives for awhile and didn't see it. I'd like to listen for whenever a file is renamed (e.g. foo.txt -> foo.old) and then magically change it back. This sounds odd, but I'm working with a stubborn application and this will actually make things work nice. So, if I do:
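Independent of whatever change-journal API the thread settles on, the "magically change it back" behaviour can be approximated by polling the directory and undoing the rename. A minimal, portable Python sketch; the foo.txt/foo.old names are the hypothetical ones from the question, and the one-shot demo at the bottom is illustrative, not from the thread:

```python
import os
import tempfile

def restore_if_renamed(directory, protected="foo.txt", renamed="foo.old"):
    """If `protected` has been renamed to `renamed`, rename it back.

    A crude polling stand-in for a real change-journal/inotify watch:
    call it periodically and it undoes the rename on the next pass.
    """
    names = set(os.listdir(directory))
    if protected not in names and renamed in names:
        os.rename(os.path.join(directory, renamed),
                  os.path.join(directory, protected))
        return True   # a rename was detected and undone
    return False      # nothing to do

# Demo: simulate the stubborn application renaming foo.txt -> foo.old,
# then undo it on the next "poll".
d = tempfile.mkdtemp()
open(os.path.join(d, "foo.txt"), "w").close()
os.rename(os.path.join(d, "foo.txt"), os.path.join(d, "foo.old"))
restore_if_renamed(d)  # moves foo.old back to foo.txt
```

A real implementation would use the platform's change notification mechanism instead of polling, since polling can miss a rename-then-open race.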
2017 Oct 26 (0 replies): not healing one file
Hey Richard, Could you share the following information please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
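The three items Karthik asks for can be gathered in one pass on each node. A sketch only, assuming a volume named applicatif and a brick path of /data/brick (both placeholders, substitute your own), run against a live gluster cluster:

```sh
# 1. Volume layout and options
gluster volume info applicatif

# 2. AFR extended attributes of the unhealed file, run on EVERY brick;
#    the trusted.afr.* xattrs show which copies are marked dirty.
getfattr -d -e hex -m . /data/brick/path/to/file

# 3. Self-heal daemon and heal-info logs (default log locations)
tail -n 100 /var/log/glusterfs/glustershd.log
tail -n 100 /var/log/glusterfs/glfsheal-applicatif.log
```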
2017 Oct 26 (3 replies): not healing one file
On a side note, try recently released health report tool, and see if it does diagnose any issues in setup. Currently you may have to run it in all the three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
2017 Oct 26 (2 replies): not healing one file
...eal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 0e4473f0-f631-4cd0-ab18-3cce07e91954. sources=0 [2] sinks=1 [2017-10-25 10:40:35.228196] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 0e18f2da-e4ed-475e-9137-309501721f2d. sources=0 [2] sinks=1 [2017-10-25 10:40:35.252484] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 8f8baaee-8713-495a-bba1-9381891895e6. sources=0 [2] sinks=1 [2017-10-25 10:40:35.256883] I [MSGID: 108026] [afr-s...