
Displaying 9 results from an estimated 9 matches for "4c12".

2023 Apr 03
1
WARNING: no target object found for GUID component link lastKnownParent in deleted object
...DC=campus,DC=sertao,DC=ifrs,DC=edu,DC=br Not removing dangling one-way link on deleted object (tombstone garbage collection in progress?) WARNING: no target object found for GUID component link fromServer in deleted object CN=a4ceb105-f308-4e76-84ad-b17b4a3c57c0,CN=NTDS Settings\0ADEL:c3c1d5bf-17fe-4c12-a5f8-61d68bab2e89,CN=DC4\0ADEL:fd9ce42d-697a-43ea-aeb8-c50bc832a2cb,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=campus,DC=sertao,DC=ifrs,DC=edu,DC=br - <GUID=df8357a0-8331-4c51-9009-82fb0aa23b81>;CN=NTDS Settings\0ADEL:df8357a0-8331-4c51-9009-82fb0aa23b81,CN=DC3\0ADEL:9...
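These warnings are typical output of samba-tool's database check walking deleted objects. As a minimal sketch (the thread excerpt does not show the exact invocation used), dangling one-way links like the fromServer reference above are usually surfaced, and optionally repaired, by running dbcheck across all naming contexts:

  # report dangling links and other inconsistencies, read-only (assumed invocation)
  samba-tool dbcheck --cross-ncs
  # apply the suggested fixes without per-item prompting
  samba-tool dbcheck --cross-ncs --fix --yes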
2023 Apr 04
1
WARNING: no target object found for GUID component link lastKnownParent in deleted object
...Not removing dangling one-way link on deleted object (tombstone garbage collection in progress?) WARNING: no target object found for GUID component link fromServer in deleted object CN=a4ceb105-f308-4e76-84ad-b17b4a3c57c0,CN=NTDS Settings\0ADEL:c3c1d5bf-17fe-4c12-a5f8-61d68bab2e89,CN=DC4\0ADEL:fd9ce42d-697a-43ea-aeb8-c50bc832a2cb,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=campus,DC=sertao,DC=ifrs,DC=edu,DC=br - <GUID=df8357a0-8331-4c51-9009-82fb0aa23b81>;CN=NTDS Settings\0ADEL:df8357a0-8331-4c51-9009-82fb0aa...
2013 Dec 10
1
Error after crash of Virtual Machine during migration
...38467 Y 4584 Self-heal Daemon on storage-gfs-4-prd N/A Y 4590 storage-gfs-3-prd:~# gluster peer status Number of Peers: 2 Hostname: storage-1-saas Uuid: 37b9d881-ce24-4550-b9de-6b304d7e9d07 State: Peer in Cluster (Connected) Hostname: storage-gfs-4-prd Uuid: 4c384f45-873b-4c12-9683-903059132c56 State: Peer in Cluster (Connected) (from storage-1-saas)# gluster peer status Number of Peers: 2 Hostname: 172.16.3.60 Uuid: 1441a7b0-09d2-4a40-a3ac-0d0e546f6884 State: Peer in Cluster (Connected) Hostname: storage-gfs-4-prd Uuid: 4c384f45-873b-4c12-9683-903059132c56 State: Pe...
2013 Jul 17
0
Gluster 3.4.0 RDMA stops working with more than a small handful of nodes
...w successfully stop/delete the volume with a status of success: root at cs1-p:~# gluster volume stop perftest Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y volume stop: perftest: success Volume Name: perftest Type: Distributed-Replicate Volume ID: ef206a76-7b26-4c12-9ccf-b3d250f36403 Status: Stopped Number of Bricks: 50 x 2 = 100 Transport-type: rdma root at cs1-p:~# gluster volume delete perftest Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y volume delete: perftest: success If there is a known workaround to th...
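As an aside, the stop/delete sequence in this excerpt can be scripted; a minimal sketch reusing the perftest volume name from the excerpt, with --mode=script suppressing the y/n confirmation prompts shown above:

  # stop and remove the volume without interactive confirmation
  gluster --mode=script volume stop perftest
  gluster --mode=script volume delete perftest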
2018 Apr 20
0
CentOS-virt Digest, Vol 128, Issue 1
...> Message: 1 > Date: Thu, 19 Apr 2018 18:11:59 +0100 > From: mql <email at ej73.com> > To: centos-virt at centos.org > Subject: [CentOS-virt] Apparent discontinuity between advertised centos7 release 1803_01 and content of centos-release file > Message-ID: <A6EFFA5E-4C12-4D8C-8ED3-E64094153E9E at ej73.com> > Content-Type: text/plain; charset="us-ascii" > > Hello, > > I searched centos7 in the AWS marketplace for the at-time-of-writing-latest centos7 image: https://aws.amazon.com/marketplace/pp/B00O7WM7QW?qid=1524138193326&a...
2016 Nov 21
2
Winbind traffic not encrypted
...c35e c04d 01f5 5d44 f6ee 1c20 cb7d ...^.M..]D.....} 0x04f0: a057 3f0d cf82 5241 d7b8 8bbd 5e4a fad4 .W?...RA....^J.. 0x0500: d16d 0a04 f688 a158 89ac 951e 2051 bbf8 .m.....X.....Q.. 0x0510: 2199 a9ed 1c97 4606 b4cc 9863 c5a8 8d06 !.....F....c.... 0x0520: 5c69 85a9 2757 a815 4c12 006c accc d5f5 \i..'W..L..l.... 0x0530: ebd0 0373 f1f0 248e 7831 f59f ec5f 76f4 ...s..$.x1..._v. 0x0540: 6863 b7df 7e84 fd2a 23f9 87ec c9c0 813a hc..~..*#......: 0x0550: 45e4 3ae4 67a6 15e1 72ae 95ff 232b f9a4 E.:.g...r...#+.. 0x0560: 86c6 636c 3164 37e8 e799 6909 3299 c...
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information, please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
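For item 2, a minimal usage sketch; the brick and file paths below are hypothetical placeholders, and the command has to be run against the brick path on each of the bricks:

  # dump all extended attributes of the file, hex-encoded, directly on a brick
  getfattr -d -e hex -m . /data/brick1/home/path/to/file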
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool, and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...27:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 043247cc-f85f-42d5-b76e-63b4d04c882b. sources=0 [2] sinks=1 [2017-10-25 10:40:26.362893] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on fab16947-a8e9-4c12-b805-034437fa3f71 [2017-10-25 10:40:26.366057] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on fab16947-a8e9-4c12-b805-034437fa3f71. sources=0 [2] sinks=1 [2017-10-25 10:40:26.374034] I [MSGID: 108026] [afr-self-heal-common.c:132...
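The volume name behind 0-home-replicate-0 appears to be home; assuming that, a hedged sketch for listing entries still pending heal after the selfheal runs logged above:

  # show files/gfids that still need healing on the (assumed) home volume
  gluster volume heal home info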