Displaying 20 results from an estimated 1000 matches similar to: "FileSize changing in GlusterNodes"
2013 Jul 03
1
Recommended filesystem for GlusterFS bricks.
Hi,
Which is the recommended filesystem to be used for the bricks in GlusterFS? XFS/EXT3/EXT4, etc.?
Thanks & Regards,
Bobby Jacob
Senior Technical Systems Engineer | eGroup
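The usual upstream recommendation is XFS with 512-byte inodes. A minimal brick-preparation sketch under that assumption (the device and mount point are illustrative, not from the post):

    # assumes XFS per the common GlusterFS recommendation; paths are examples
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /export/brick1
    mount /dev/sdb1 /export/brick1
    echo '/dev/sdb1 /export/brick1 xfs defaults 0 0' >> /etc/fstab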
2013 Sep 19
2
Support for GlusterFS
Hi,
Is there an option to procure support for a GlusterFS deployment? As we are moving into core production scenarios with GlusterFS in mind, it would be reassuring to have this confirmation.
Thanks & Regards,
Bobby Jacob
2013 Oct 02
1
Shutting down a GlusterFS server.
Hi,
I have a 2-node replica volume running with GlusterFS 3.3.2 on CentOS 6.4. I want to shut down one of the gluster servers for maintenance. Is there any best practice to follow when turning off a server, in terms of services etc., or can I just shut down the server?
Thanks & Regards,
Bobby Jacob
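A hedged pre-shutdown sketch: confirm there are no pending heals and the peer is healthy, then stop the gluster services (VOLNAME is a placeholder, not from the post):

    gluster volume heal VOLNAME info   # confirm nothing is pending heal
    gluster peer status                # confirm the other node is connected
    service glusterd stop              # stop the management daemon
    pkill glusterfsd                   # stop the per-brick server processes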
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi,
I have a glusterfs 3.3.1 setup with 2 servers and a replicated volume used
by 4 clients.
Sometimes some of the files can't be accessed from certain clients. After I force
a full heal on the brick, I see several files healed. Is this behavior
normal?
Thanks
--
Paulo Silva <paulojjs at gmail.com>
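For reference, a hedged sketch of the heal commands involved on 3.3.x (VOLNAME is a placeholder):

    gluster volume heal VOLNAME full            # crawl the volume and heal everything
    gluster volume heal VOLNAME info            # entries currently needing heal
    gluster volume heal VOLNAME info healed     # entries healed recently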
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi,
I'm running GlusterFS 3.3.1 on CentOS 6.4.
# gluster volume status
Status of volume: glustervol
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick         24009   Y       20031
Brick KWTOCUATGS002:/mnt/cloudbrick
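The second brick is listed without a port or PID, which usually means its brick process is down. A hedged sketch of the usual recovery steps (volume name taken from the status output above):

    gluster volume status glustervol         # confirm which brick is offline
    gluster volume start glustervol force    # respawn dead brick processes
    gluster volume heal glustervol info      # then watch the heal catch up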
2013 Jul 09
2
Gluster Self Heal
Hi,
I have a 2-node gluster with 3 TB storage.
1) I believe the "glusterfsd" is responsible for the self healing between the 2 nodes.
2) Due to some network error, the replication stopped for some reason but the application was accessing the data from node1. When I manually try to start "glusterfsd" service, its not starting.
Please advice on how I can maintain
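One point worth hedging here: in 3.3 the self-heal work is done by the glustershd daemon spawned by glusterd, while glusterfsd is the per-brick server process. A minimal sketch for checking both (VOLNAME is a placeholder):

    service glusterd restart                  # glusterd respawns bricks and glustershd
    gluster volume status VOLNAME             # includes a Self-heal Daemon row
    tail -f /var/log/glusterfs/glustershd.log # watch for heal activity and errors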
2013 Aug 22
2
Error when creating volume
Hello,
I've removed a volume and I can't re-create it:
gluster volume create gluster-export gluster-6:/export gluster-5:/export
gluster-4:/export gluster-3:/export
/export or a prefix of it is already part of a volume
I've formatted the partition and reinstalled the 4 gluster servers and
the error still appears.
Any idea?
Thanks.
--
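This error usually means the brick directories still carry gluster's extended attributes from the old volume. A hedged sketch of the commonly documented fix, run on each server for each brick path:

    setfattr -x trusted.glusterfs.volume-id /export   # clear the old volume id
    setfattr -x trusted.gfid /export                  # clear the old gfid
    rm -rf /export/.glusterfs                         # drop leftover gluster metadata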
2017 Aug 29
2
error msg in the glustershd.log
Hi,
I need some clarification on the below error message in the glustershd.log file. What is this message, and why is it showing up? I am currently using glusterfs 3.10.1.
Whenever I start a write process to the volume (mounted through FUSE), I see this kind of error, and the glustershd process consumes some percentage of CPU until the write process completes.
[2017-08-28 10:01:13.030710] W [MSGID: 122006]
2017 Aug 29
2
error msg in the glustershd.log
I am using 3.10.1; from which version is this update available?
On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspandey at redhat.com> wrote:
>
> Whenever we do some fop on EC volume on a file, we check the xattr also to
> see if the file is healthy or not. If not, we trigger heal.
> lookup is the fop for which we don't take inodelk lock so it is possible
> that the
2017 Aug 29
0
error msg in the glustershd.log
Whenever we do some fop on an EC volume on a file, we also check the xattrs to see whether the file is healthy or not. If not, we trigger a heal.
lookup is the fop for which we don't take an inodelk lock, so it is possible that the xattrs we get for the lookup fop differ across some bricks.
This difference is not reliable, but we still trigger a heal, and that is why you are seeing these messages.
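A hedged way to look at the xattrs this explanation refers to is to compare the trusted.ec.* values for the same file directly on each brick (brick paths are illustrative):

    getfattr -d -m . -e hex /brick1/path/to/file
    getfattr -d -m . -e hex /brick2/path/to/file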
2017 Aug 31
0
error msg in the glustershd.log
Ashish, which version has this issue fixed?
On Tue, Aug 29, 2017 at 6:38 PM, Amudhan P <amudhan83 at gmail.com> wrote:
> I am using 3.10.1 from which version this update is available.
>
>
> On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspandey at redhat.com>
> wrote:
>
>>
>> Whenever we do some fop on EC volume on a file, we check the xattr also
2017 Aug 31
1
error msg in the glustershd.log
Based on this BZ https://bugzilla.redhat.com/show_bug.cgi?id=1414287
it has been fixed in glusterfs-3.11.0
---
Ashish
----- Original Message -----
From: "Amudhan P" <amudhan83 at gmail.com>
To: "Ashish Pandey" <aspandey at redhat.com>
Cc: "Gluster Users" <gluster-users at gluster.org>
Sent: Thursday, August 31, 2017 1:07:16 PM
Subject:
2016 Oct 19
10
[PATCH v4 0/5] implement vcpu preempted check
change from v3:
add x86 vcpu preempted check patch
change from v2:
no code change, fix typos, update some comments
change from v1:
a simpler definition of default vcpu_is_preempted
skip machine type check on ppc, and add config. remove dedicated macro.
add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner.
add more comments
thanks boqun and Peter's suggestion.
2016 Oct 28
16
[PATCH v6 00/11] implement vcpu preempted check
change from v5:
split x86/kvm patch into guest/host part.
introduce kvm_write_guest_offset_cached.
fix some typos.
rebase patch onto 4.9.2
change from v4:
split x86 kvm vcpu preempted check into two patches.
add documentation patch.
add x86 vcpu preempted check patch under xen
add s390 vcpu preempted check patch
change from v3:
add x86 vcpu preempted check patch
change from v2:
no code
2007 Aug 13
8
disk performance about half in domU? + question about XenSource
Based on some tests we ran, it seems the biggest performance
hit you get from running within domU is from disk I/O. We
did some mysql read and write tests, and for both, our
performance is about half that compared to native. Has that
been others' experience?
Is there any way to make this better? We are using physical
partitions. In contrast, cpu/memory tests appear to be near
2011 Jun 24
10
[PATCH 0/9] remove i_alloc_sem V2
i_alloc_sem has always been a bit of an odd "lock". It's the only remaining
rw_semaphore that can be released by a different thread than the one that
locked it, and its use case in the core direct I/O code is more like a
counter given that the writers already have external serialization.
This series removes it in favour of a simpler counter scheme, thus getting
rid
2008 Oct 01
5
Xm Create Image Path
All,
Is it possible for xm create foo.cfg to traverse a symlink to access foo.img? Here's an example config and the resulting error:
---
name = "jim"
memory = "512"
disk = [ 'phy:/dev/VolGroup00/foo,xvda,r',
'tap:aio:/home/fred/local/jim/build/foo_swap.img,xvdd,w',
2013 Dec 10
4
Structure needs cleaning on some files
Hi All,
When reading some files we get this error:
md5sum: /path/to/file.xml: Structure needs cleaning
in /var/log/glusterfs/mnt-sharedfs.log we see these errors:
[2013-12-10 08:07:32.256910] W [client-rpc-fops.c:526:client3_3_stat_cbk] 1-testvolume-client-0: remote operation failed: No such file or directory
[2013-12-10 08:07:32.257436] W [client-rpc-fops.c:526:client3_3_stat_cbk]
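"Structure needs cleaning" is the kernel's EUCLEAN error and typically points at corruption on a brick's local filesystem (often XFS). A heavily hedged repair sketch, assuming an XFS brick; the paths and device are illustrative, and the brick must be offline first:

    umount /bricks/brick1        # take the affected brick's filesystem offline
    xfs_repair /dev/sdX1         # repair the underlying XFS device
    mount /bricks/brick1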