similar to: Inconsistent md5sum of replicated file

Displaying 20 results from an estimated 2000 matches similar to: "Inconsistent md5sum of replicated file"

2011 Sep 07
2
Gluster-users Digest, Vol 41, Issue 16
Hi Phil, we had the same problem; try compiling with debug options. Yes, this sounds strange, but it helps when you are using SLES: glusterd then works OK and you can start working with it. Just put export CFLAGS='-g3 -O0' between %build and %configure in the glusterfs spec file. But be warned: don't use it with important data, especially when you are planning to use the replication feature,
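A minimal sketch of the suggested spec-file tweak; the surrounding context of glusterfs.spec is abbreviated here and is an assumption:

  %build
  export CFLAGS='-g3 -O0'
  %configure

-g3 keeps full debug information and -O0 disables optimization, which is why the poster warns against using such a build in production.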
2011 Aug 24
1
Input/output error
Hi, everyone. It's nice meeting you; my English is not very good. I am writing because I'd like to update GlusterFS to 3.2.2-1, and I want to change from a gluster mount to an NFS mount. I installed GlusterFS 3.2.1 one week ago with 2-server replication. OS: CentOS 5.5 64-bit. RPMs: glusterfs-core-3.2.1-1, glusterfs-fuse-3.2.1-1. Command: gluster volume create syncdata replica 2 transport tcp
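For reference, a replica-2 create command of this shape normally ends with the two brick paths; since the quoted command is truncated, the hostnames and brick directories below are hypothetical:

  gluster volume create syncdata replica 2 transport tcp server1:/data/brick1 server2:/data/brick1
  gluster volume start syncdata
  mount -t nfs -o vers=3 server1:/syncdata /mnt/syncdata

The last line is the NFS-style mount the poster wants to switch to; GlusterFS 3.2's built-in NFS server speaks NFSv3.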
2011 Oct 25
1
problems with gluster 3.2.4
Hi, we have 4 test machines (gluster01 to gluster04). I've created a replicated volume across the 4 machines. Then on the client machine I've executed: mount -t glusterfs gluster01:/volume01 /mnt/gluster and everything works OK. The problem occurs on every client machine where I do: umount /mnt/gluster and then mount -t glusterfs gluster01:/volume01 /mnt/gluster The client
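The reproduction steps, restated as the plain command sequence described above (volume name and mount point taken from the quoted text):

  mount -t glusterfs gluster01:/volume01 /mnt/gluster   # first mount: works
  umount /mnt/gluster
  mount -t glusterfs gluster01:/volume01 /mnt/gluster   # remount: problem appears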
2011 Aug 21
2
Fixing split brain
Hi, consider the typical split brain situation: reading from a file gets EIO, and the logs say:
[2011-08-21 13:38:54.607590] W [afr-open.c:168:afr_open] 0-gfs-replicate-0: failed to open as split brain seen, returning EIO
[2011-08-21 13:38:54.607895] W [fuse-bridge.c:585:fuse_fd_cbk] 0-glusterfs-fuse: 1371456: OPEN() /manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub => -1 (Input/output
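When diagnosing such a file, one usually compares the AFR changelog xattrs of the copy on each brick; a sketch, where /export/brick1 stands in for the real (unspecified) brick path:

  getfattr -d -m . -e hex /export/brick1/manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub

Non-zero trusted.afr pending counters on both bricks, each accusing the other copy, are what AFR reports as split brain.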
2011 Dec 14
1
glusterfs crash when the one of replicate node restart
Hi, we have used glusterfs for two years. After upgrading to 3.2.5, we discovered that when one of the replicate nodes reboots and starts the glusterd daemon, gluster crashes because CPU usage on the other replicate node reaches 100%. Our gluster volume info: Type: Distributed-Replicate Status: Started Number of Bricks: 5 x 2 = 10 Transport-type: tcp Options Reconfigured: performance.cache-size: 3GB
2017 Jun 02
2
Slow write times to gluster disk
Are you sure using conv=sync is what you want? I normally use conv=fdatasync; I'll look up the difference between the two and see if it affects your test. -b ----- Original Message ----- > From: "Pat Haley" <phaley at mit.edu> > To: "Pranith Kumar Karampuri" <pkarampu at redhat.com> > Cc: "Ravishankar N" <ravishankar at redhat.com>,
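For context, the two dd flags differ in meaning: conv=sync pads every input block to the block size, while conv=fdatasync forces the output data to disk before dd reports its timing. A sketch of the kind of write test under discussion (file path and sizes are hypothetical):

  dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=fdatasync
  dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=sync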
2017 Jun 12
0
Slow write times to gluster disk
Hi Guys, I was wondering what our next steps should be to solve the slow write times. Recently I was debugging a large code and writing a lot of output at every time step. When I tried writing to our gluster disks, it was taking over a day to do a single time step, whereas if I had the same program (same hardware, network) write to our NFS disk the time per time step was about 45 minutes.
2013 Sep 06
2
[Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
It's a pity I don't know how to re-create the issue; still, 1-2 of our 120 clients crash every day. Below is the gdb backtrace:
(gdb) where
#0 0x0000003267432885 in raise () from /lib64/libc.so.6
#1 0x0000003267434065 in abort () from /lib64/libc.so.6
#2 0x000000326746f7a7 in __libc_message () from /lib64/libc.so.6
#3 0x00000032674750c6 in malloc_printerr () from
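When a crashed client leaves a core file, a fuller trace than the one above is usually gathered along these lines (the binary and core paths are assumptions):

  gdb /usr/sbin/glusterfs /path/to/core
  (gdb) thread apply all bt full

'thread apply all bt full' records every thread's stack with local variables, which is what developers generally ask for alongside the client log.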
2017 Jun 26
3
Slow write times to gluster disk
Hi All, Decided to try another test of gluster mounted via FUSE vs gluster mounted via NFS, this time using the software we run in production (i.e. our ocean model writing a netCDF file). Gluster mounted via NFS: the run took 2.3 hr. Gluster mounted via FUSE: the run took 44.2 hr. The only problem with using gluster mounted via NFS is that it does not respect the group write permissions which
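The two access paths being compared would typically be mounted like this (server name, volume name and mount points are hypothetical):

  mount -t glusterfs gluster-server:/data-volume /gdata-fuse        # native FUSE client
  mount -t nfs -o vers=3 gluster-server:/data-volume /gdata-nfs     # gluster's built-in NFSv3 server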
2017 Jun 20
2
Slow write times to gluster disk
Hi Ben, Sorry this took so long, but we had a real-time forecasting exercise last week and I could only get to this now. Backend Hardware/OS:
* Much of the information on our back end system is included at the top of http://lists.gluster.org/pipermail/gluster-users/2017-April/030529.html
* The specific model of the hard disks is Seagate ENTERPRISE CAPACITY V.4 6TB
2017 Jun 23
2
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote: > > Hi, > > Today we experimented with some of the FUSE options that we found in the > list. > > Changing these options had no effect: > > gluster volume set test-volume performance.cache-max-file-size 2MB > gluster volume set test-volume performance.cache-refresh-timeout 4 > gluster
2017 Jun 27
0
Slow write times to gluster disk
On Mon, Jun 26, 2017 at 7:40 PM, Pat Haley <phaley at mit.edu> wrote: > > Hi All, > > Decided to try another tests of gluster mounted via FUSE vs gluster > mounted via NFS, this time using the software we run in production (i.e. > our ocean model writing a netCDF file). > > gluster mounted via NFS the run took 2.3 hr > > gluster mounted via FUSE: the run took
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Thanks for the swift turnaround. Will try this out and let you know. Thanks and Regards, Ram From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] Sent: Monday, July 10, 2017 8:31 AM To: Sanoj Unnikrishnan Cc: Ankireddypalle Reddy; Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost Ram,
2017 Jun 24
0
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 9:10 AM, Pranith Kumar Karampuri < pkarampu at redhat.com> wrote: > > > On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote: > >> >> Hi, >> >> Today we experimented with some of the FUSE options that we found in the >> list. >> >> Changing these options had no effect: >> >>
2017 Jun 22
0
Slow write times to gluster disk
Hi, Today we experimented with some of the FUSE options that we found in the list. Changing these options had no effect:
gluster volume set test-volume performance.cache-max-file-size 2MB
gluster volume set test-volume performance.cache-refresh-timeout 4
gluster volume set test-volume performance.cache-size 256MB
gluster volume set test-volume performance.write-behind-window-size 4MB
gluster
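A quick way to confirm that such settings actually took effect, and to roll one back, is sketched below (volume name as in the quoted commands):

  gluster volume info test-volume        # changed options appear under "Options Reconfigured"
  gluster volume reset test-volume performance.cache-size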
2017 Jul 07
2
[Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:20 PM, Ankireddypalle Reddy <areddy at commvault.com> wrote: > Pranith, > > Thanks for looking into the issue. The bricks were > mounted after the reboot. One more thing that I noticed was when the > attributes were manually set when glusterd was up then on starting the > volume the attributes were again lost. Had to stop glusterd
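For readers following along: the extended attributes under discussion are usually inspected, and if need be restored, on a brick root along these lines. The brick path is hypothetical and the volume UUID has to come from the volume's own metadata (the volume-id line in /var/lib/glusterd/vols/<volname>/info):

  getfattr -d -m . -e hex /data/brick1                                      # check for trusted.glusterfs.volume-id and trusted.gfid
  setfattr -n trusted.glusterfs.volume-id -v 0x<uuid-hex> /data/brick1      # restore volume-id, as the poster describes doing manually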
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Please use the systemtap script (https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA) to check which process is invoking removexattr calls. It prints the pid, tid and arguments of all removexattr calls. I have checked for these fops at the protocol/client and posix translators. To run the script: 1) install systemtap and its dependencies, 2) install glusterfs-debuginfo, 3) change the path
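The script at the paste link probes the glusterfs translators directly (hence the glusterfs-debuginfo requirement). As a rough stand-in, which is not that script, a kernel-side one-liner can catch any process issuing removexattr syscalls on the brick host:

  stap -e 'probe syscall.removexattr, syscall.lremovexattr, syscall.fremovexattr { printf("%s pid=%d tid=%d %s\n", execname(), pid(), tid(), argstr) }'

This only shows that some process removed an xattr, not which translator inside glusterfs did it, so it complements rather than replaces the posted script.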
2017 Jul 07
3
[Gluster-devel] gfid and volume-id extended attributes lost
Did anything special happen on these two bricks? It can't happen in the I/O path: posix_removexattr() has:
if (!strcmp (GFID_XATTR_KEY, name)) {
        gf_msg (this->name, GF_LOG_WARNING, 0, P_MSG_XATTR_NOT_REMOVED,
                "Remove xattr called on gfid for file %s", real_path);
        op_ret = -1;
        goto
2017 Jul 08
2
[Gluster-devel] gfid and volume-id extended attributes lost
Ram, As per the code, self-heal was the only candidate which *can* do it. Could you check the logs of the self-heal daemon and the mount to see if there are any metadata heals on root? +Sanoj Sanoj, is there any systemtap script we can use to detect which process is removing these xattrs? On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com> wrote: >
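One way to do the suggested log check, assuming the default log locations (exact message wording may differ between releases):

  grep -i "metadata selfheal" /var/log/glusterfs/glustershd.log
  grep -i "metadata selfheal" /var/log/glusterfs/<mount-point>.log   # client log; the file is named after the mount path

AFR logs a message when it completes a metadata self-heal on an entry, so hits on the root would support the self-heal theory.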
2017 Jul 07
3
[Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:25 PM, Ankireddypalle Reddy <areddy at commvault.com> wrote: > 3.7.19 > These are the only callers of removexattr, and only _posix_remove_xattr has the potential to do a removexattr, as posix_removexattr already makes sure that it is not gfid/volume-id. And surprise surprise, _posix_remove_xattr happens only from the healing code of afr/ec. And this can only happen
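The caller survey described here can be repeated against a glusterfs source checkout with something like the following; the tag name and tree layout reflect the project's usual conventions and are assumptions for this particular release:

  git clone https://github.com/gluster/glusterfs.git && cd glusterfs && git checkout v3.7.19
  grep -rn "_posix_remove_xattr\|posix_removexattr" xlators/storage/posix/src/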