search for: a622

Displaying 7 results from an estimated 7 matches for "a622".

2003 Jan 19
3
All data "gone," lost+found is left.
...erved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal UUID: <none>
Journal inode: 8
Journal device: 0x0000
First orphan inode: 0
Default directory hash: tea
Directory Hash Seed: 0c75b950-e953-450b-a622-0108fc922004
Any help is appreciated. Thanks, Pat
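The fields quoted above are ext2/ext3 superblock values as printed by dumpe2fs. A minimal sketch of the usual first steps for an "everything ended up in lost+found" situation follows; the device name /dev/hda1 and mount point /mnt are hypothetical stand-ins:

    # Print just the superblock header; these are the same fields quoted above.
    dumpe2fs -h /dev/hda1

    # With the filesystem unmounted, force a full check; e2fsck reattaches
    # orphaned files under lost+found, named after their inode numbers (#12345).
    e2fsck -f /dev/hda1

    # After remounting, inspect what was salvaged.
    ls -la /mnt/lost+found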
2003 Nov 20
11
Problem running SSH on IBM PPC440 processor, help appreciated
...d check bytes on input.
packet_send
plain:     1a03 bc22 1896 9f52 0100 0000 1f43 6f72 7275 7074 6564 2063 6865 636b 2062 7974 6573 206f 6e20 696e 7075 742e 1f61 9863
encrypted: 0000 0028 f66c 265c b9d8 734d a622 507a 0ca2 1c47 37d5 6b23 d1e0 1bdd ed09 6461 3b87 df96 14a9 3698 d5e6 ee48 b758 8461 a2aa 0fc7
sh-2.05#
RUNNING WITH -2 OPTION (partial dump)
-------------------------------------
plain:     0000 0000 0015
encrypted: 0000 000c 0a15 0000 0000 0000 0000 0000...
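The "plain" bytes in that dump are mostly printable ASCII: the 0x01 appears to be the SSH-1 disconnect message type, 0x0000001f a 31-byte string length, and the rest spells out the error itself. A quick way to confirm (this xxd invocation is ours, not from the thread):

    # Decodes the quoted payload; prints "Corrupted check bytes on input."
    echo '43 6f 72 72 75 70 74 65 64 20 63 68 65 63 6b 20 62 79 74 65 73 20 6f 6e 20 69 6e 70 75 74 2e' | xxd -r -p; echo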
2011 Sep 06
1
Inconsistent md5sum of replicated file
I was wondering if anyone would be able to shed some light on how a file could end up with inconsistent md5sums on Gluster backend storage. Our configuration is running on Gluster v3.1.5 in a distribute-replicate setup consisting of 8 bricks. Our OS is Red Hat 5.6 x86_64. Backend storage is an ext3 RAID 5. The 8 bricks are in RR DNS and are mounted for reading/writing via NFS automounts.
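Since the backend here is plain ext3 under each brick, one way to narrow this down is to hash the file's backend copy on each replica brick directly and pull its AFR changelog xattrs. A minimal sketch; the host names brick1/brick2 and the brick path are hypothetical stand-ins for the replica pair that owns the file:

    for h in brick1 brick2; do
        ssh "$h" "md5sum /export/brick/path/to/file"
        ssh "$h" "getfattr -d -e hex -m trusted.afr /export/brick/path/to/file"
    done

Differing sums alongside clean (all-zero) trusted.afr changelogs would point at silent backend divergence rather than a pending self-heal.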
2003 Dec 01
0
No subject
...Linux 8.9.3-0.1) id RAA00673 for samba@samba.org; Mon, 28 May 2001 17:23:50 +0200
Date: Mon, 28 May 2001 17:23:50 +0200
From: Christian Fertig <cf@baer-ib.de>
To: samba@samba.org
Subject: Re: smbclient (was Re: smbfs i/o error with large files (>10 MB))
Message-ID: <20010528172349.A622@ny.baer-ib.de>
Mail-Followup-To: Christian Fertig <cf@baer-ib.de>, samba@samba.org
References: <20010521131753.A2037@ny.baer-ib.de> <Pine.LNX.4.30.0105211549500.3118-100000@cola.teststation.com>
Mime-Version:...
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards, Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try the recently released health
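Collected together, the three requested items look roughly like this (the glustershd/glfsheal log paths assume the default /var/log/glusterfs layout):

    # 1. Volume configuration
    gluster volume info <volname>

    # 2. On every brick host, against the file's path under the brick root
    getfattr -d -e hex -m . <brickpath/filepath>

    # 3. Self-heal daemon and heal-launcher logs
    tail -n 200 /var/log/glusterfs/glustershd.log
    tail -n 200 /var/log/glusterfs/glfsheal-<volname>.log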
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...r_log_selfheal] 0-home-replicate-0: Completed data selfheal on b08b345a-c9dc-4a78-bf97-97972ece8dd7. sources=0 [2] sinks=1
[2017-10-25 10:40:35.201903] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on 35efed04-d15e-451b-a622-f0fc4f842ec7
[2017-10-25 10:40:35.207517] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on 35efed04-d15e-451b-a622-f0fc4f842ec7. sources=0 [2] sinks=1
[2017-10-25 10:40:35.215576] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr...
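The 0-home-replicate-0 prefix in these messages suggests the volume is named home. If so, the remaining heal backlog and any split-brain entries can be listed with (volume name assumed from the log, not stated in the thread):

    gluster volume heal home info
    gluster volume heal home info split-brain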