Displaying 20 results from an estimated 4000 matches similar to: "Questions about snapshots"
2024 Jan 23
1
Questions about snapshots
Hi Stefan,
I'm not sure which doc you are referring to. It would help if you could
share it.
I would start looking at the barrier translator in
xlators/features/barrier. It is designed to block ops that modify the
underlying file system from being acknowledged to the client. The
list of file system operations that need to be blocked should be listed
here
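As a rough illustration of the barrier idea described above (a toy model in Python, not GlusterFS code, and all names here are invented for the sketch): while the barrier is enabled, acknowledgements for modifying ops are queued instead of delivered to the client; disabling the barrier flushes the queue in order.

```python
from collections import deque

class Barrier:
    """Toy model of the barrier translator's behaviour: while enabled,
    acknowledgements for modifying ops are held back; on disable, all
    queued acks are released in order."""

    def __init__(self):
        self.enabled = False
        self.queued = deque()

    def ack(self, op, deliver):
        # 'deliver' stands in for sending the acknowledgement to the client
        if self.enabled:
            self.queued.append((op, deliver))
        else:
            deliver(op)

    def disable(self):
        self.enabled = False
        while self.queued:
            op, deliver = self.queued.popleft()
            deliver(op)

# usage sketch
sent = []
b = Barrier()
b.enabled = True
b.ack("WRITE", sent.append)
b.ack("UNLINK", sent.append)
assert sent == []                    # acks held while the barrier is up
b.disable()
assert sent == ["WRITE", "UNLINK"]   # released in order once lowered
```

This is the property snapshots rely on: no modifying op is confirmed to a client while the barrier is up, so the on-disk state is consistent when the snapshot is taken.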
2013 Jan 08
1
help me, glusterfs 3.3 doesn't support fopen-keep-cache?
Dear gluster experts,
I searched through the glusterfs 3.3 source tree and can't find any code
related to the FUSE open option FOPEN_KEEP_CACHE. Does this mean that
glusterfs 3.3 doesn't support the FUSE keep-cache feature? However, I did
find keep-cache code in mainline. My questions are:
1. Does glusterfs 3.3 support page cache?
2. If not, what is the best practice to improve performance if a file is
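For context on the FOPEN_KEEP_CACHE question above: when a FUSE filesystem sets that flag in its open reply, the kernel keeps previously cached pages for the file instead of invalidating them on each open. A toy Python sketch of the usual decision (illustrative only; function and parameter names are invented, not GlusterFS or libfuse APIs):

```python
def keep_cache(cached_mtime, current_mtime, fopen_keep_cache=True):
    """Toy model: return True when the kernel may reuse existing page
    cache for a file on open (i.e. when FOPEN_KEEP_CACHE would be set).
    Cached pages are kept only if the file is unchanged since caching."""
    if not fopen_keep_cache:
        return False   # option off: invalidate the cache on every open
    return cached_mtime == current_mtime

# unchanged file -> cached pages survive the open
assert keep_cache(1000, 1000) is True
# file modified elsewhere -> cache must be dropped
assert keep_cache(1000, 1005) is False
```

The trade-off this models: reusing the cache avoids re-reading unchanged files, at the risk of serving stale data if another client modified the file without the mtime check catching it.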
2024 Jan 23
1
Questions about snapshots
Hi Varun,
On 23.01.24 at 01:37, Varun wrote:
> I'm not sure which doc are you referring to? It would help if you can share
> it.
Here,
https://docs.gluster.org/en/main/Administrator-Guide/Managing-Snapshots/#pre-requisites
  and all the places where this page is copied to ;-)
Thanks for the link to the source code, but source code is not 
documentation ;-)
2011 Aug 24
1
Input/output error
Hi, everyone.
It's nice meeting you.
My English is not very good, so please bear with me.
I am writing because I'd like to update GlusterFS to 3.2.2-1, and I want
to change from a gluster mount to an NFS mount.
I installed GlusterFS 3.2.1 one week ago, replicated across 2 servers.
OS:CentOS5.5 64bit
RPM:glusterfs-core-3.2.1-1
    glusterfs-fuse-3.2.1-1
command
 gluster volume create syncdata replica 2  transport tcp
2017 Jun 04
2
Rebalance + VM corruption - current status and request for feedback
Great news.
Is this planned to be published in next release?
On 29 May 2017 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com>
wrote:
> Thanks for that update. Very happy to hear it ran fine without any issues.
> :)
>
> Yeah so you can ignore those 'No such file or directory' errors. They
> represent a transient state where DHT in the client process
2018 May 02
3
Healing : No space left on device
Hello list,
I have an issue on my Gluster cluster. It is composed of two data nodes
and an arbiter for all my volumes.
After having upgraded my bricks to gluster 3.12.9 (Fedora 27), this is
what I get :
- on node 1, volumes won't start, and glusterd.log shows a lot of:
    [2018-05-02 09:46:06.267817] W
[glusterd-locks.c:843:glusterd_mgmt_v3_unlock]
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
I put the share into debug mode and then repeated the process from a ppc64 
client and an x86 client. Weirdly the client logs were almost identical.
Here's the ppc64 gluster client log of attempting to create a folder...
-------------
[2017-09-20 13:34:23.344321] D 
[rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (--> 
2017 Jun 06
2
Rebalance + VM corruption - current status and request for feedback
Hi Mahdi,
Did you get a chance to verify this fix again?
If this fix works for you, is it OK if we move this bug to CLOSED state and
revert the rebalance-cli warning patch?
-Krutika
On Mon, May 29, 2017 at 6:51 PM, Mahdi Adnan <mahdi.adnan at outlook.com>
wrote:
> Hello,
>
>
> Yes, I forgot to upgrade the client as well.
>
> I did the upgrade and created a new volume,
2013 Feb 18
1
Directory metadata inconsistencies and missing output ("mismatched layout" and "no dentry for inode" error)
Hi, I'm running into a rather strange and frustrating bug, and wondering if
anyone on the mailing list might have some insight about what might be
causing it. I'm running a cluster of two dozen nodes, where the processing
nodes are also the gluster bricks (using the SLURM resource manager). Each
node has the Gluster volumes mounted natively (not NFS). All nodes are using
v3.2.7. Each job in the
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
-Krutika
On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> Great news.
> Is this planned to be published in next release?
>
> On 29 May 2017 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com>
> wrote:
>
>> Thanks for that update.
2012 Jan 13
1
Quota problems with Gluster3.3b2
Hi everyone,
  I'm playing with Gluster3.3b2, and everything is working fine when 
uploading stuff through swift. However, when I enable quotas on Gluster, 
I randomly get permission errors. Sometimes I can upload files, most 
times I can't.
  I'm mounting the partitions with the acl flag, I've tried wiping out 
everything and starting from scratch, same result. As soon as I
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks!
On 5 Jun 2017 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote:
> The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
>
> -Krutika
>
> On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Great news.
>> Is this planned to be published in next
2011 Jun 09
1
NFS problem
Hi,
I have the same problem as Juergen.
My volume is a simple replicated volume with 2 hosts and GlusterFS 3.2.0
Volume Name: poolsave
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/soft/gluster-data
Brick2: ylal2960:/soft/gluster-data
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
network.ping-timeout: 20
performance.cache-size: 512MB
2018 May 02
0
Healing : No space left on device
Oh, and *there is* space on the device where the brick's data is located.
    /dev/mapper/fedora-home   942G   868G   74G   93%   /export
On 02/05/2018 at 11:49, Hoggins! wrote:
> Hello list,
>
> I have an issue on my Gluster cluster. It is composed of two data nodes
> and an arbiter for all my volumes.
>
> After having upgraded my bricks to gluster 3.12.9 (Fedora 27), this
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Please use the systemtap script
(https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA) to check
which process is invoking removexattr calls.
It prints the pid, tid and arguments of all removexattr calls.
I have checked for these fops at the protocol/client and posix translators.
To run the script ..
1) install systemtap and dependencies.
2) install glusterfs-debuginfo
3) change the path
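The real tracing happens in the kernel via the SystemTap script linked above. Purely as a user-space analogue of the same idea (not a substitute for the stap script, and only visible within one Python process), one can wrap `os.removexattr` to log the pid, tid, and arguments of each call before delegating:

```python
import os
import threading

_real_removexattr = os.removexattr  # Linux-only API

def traced_removexattr(path, attribute, *args, **kwargs):
    # Log pid, tid and arguments, mirroring what the stap script reports,
    # then delegate to the real call.
    print(f"pid={os.getpid()} tid={threading.get_ident()} "
          f"removexattr(path={path!r}, attr={attribute!r})")
    return _real_removexattr(path, attribute, *args, **kwargs)

os.removexattr = traced_removexattr
```

Any code in the same interpreter that removes an extended attribute now logs the call first; the SystemTap script does the equivalent system-wide, which is what you need to catch an unknown process stripping the gfid/volume-id xattrs.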
2019 Apr 05
2
[PATCH net v6] failover: allow name change on IFF_UP slave interfaces
On Wed, Apr 03, 2019 at 12:52:47AM -0400, Si-Wei Liu wrote:
> When a netdev appears through hot plug then gets enslaved by a failover
> master that is already up and running, the slave will be opened
> right away after getting enslaved. Today there's a race that userspace
> (udev) may fail to rename the slave if the kernel (net_failover)
> opens the slave earlier than when the
2017 Jun 02
2
URGENT: Update issues from 3.6.6 to 3.10.2 Accessing files via samba come up with permission denied
Hi everyone,
Is there anything else we could do to check on this problem and try to
fix it? The issue is definitely related to either the Samba vfs
gluster plugin or Gluster itself. I am not sure how to pin it down
further.
I went ahead and created a new share in the samba server which is on a
local filesystem where the OS is installed, not part of gluster:
# mount | grep home
# ls -ld /home
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Thanks for the swift turn around. Will try this out and let you know.
Thanks and Regards,
Ram
From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
Sent: Monday, July 10, 2017 8:31 AM
To: Sanoj Unnikrishnan
Cc: Ankireddypalle Reddy; Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org
Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost
Ram,
    
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
      If you see it again, you can use this. I am going to send out a patch
for the code path which can lead to removal of gfid/volume-id tomorrow.
On Mon, Jul 10, 2017 at 5:19 PM, Sanoj Unnikrishnan <sunnikri at redhat.com>
wrote:
> Please use the systemtap script(https://paste.fedoraproject.org/paste/
> EGDa0ErwX0LV3y-gBYpfNA) to check which process is invoking remove xattr