Displaying 20 results from an estimated 4000 matches similar to: "Questions about snapshots"
2024 Jan 23
1
Questions about snapshots
Hi Stefan,
I'm not sure which doc you are referring to. It would help if you could
share it.
I would start by looking at the barrier translator in
xlators/features/barrier. It is designed to hold back the acknowledgement
to the client for every op that modifies the underlying file system. The
list of file system operations that need to be blocked should be listed
here
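For readers new to this: the barrier is normally driven by the snapshot machinery, but the same translator can be toggled by hand with the volume barrier command; a minimal sketch, assuming a volume named myvol:
  # Queue acknowledgements for fops that modify the volume
  gluster volume barrier myvol enable
  # ... take the snapshot of the underlying LVs here ...
  # Release all queued acknowledgements
  gluster volume barrier myvol disable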
2024 Jan 23
1
Questions about snapshots
Hi Varun,
Am 23.01.24 um 01:37 schrieb Varun:
> I'm not sure which doc you are referring to. It would help if you could
> share it.
Here,
https://docs.gluster.org/en/main/Administrator-Guide/Managing-Snapshots/#pre-requisites
and all the places where this page is copied to ;-)
Thanks for the link to the source code, but source code is not
documentation ;-)
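For anyone following the link: the prerequisite on that page is that every brick lives on a thinly provisioned LVM volume. A rough sketch of such a setup, with device, VG and volume names as placeholders:
  # Thin pool + thin LV to host one brick
  pvcreate /dev/sdb
  vgcreate vg_bricks /dev/sdb
  lvcreate -L 100G -T vg_bricks/thinpool
  lvcreate -V 80G -T vg_bricks/thinpool -n brick1
  mkfs.xfs /dev/vg_bricks/brick1
  mount /dev/vg_bricks/brick1 /bricks/brick1
  # With all bricks on thin LVs, snapshots become possible:
  gluster snapshot create snap1 myvol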
2013 Jan 08
1
help me, glusterfs 3.3 doesn't support fopen-keep-cache?
Dear gluster experts,
I searched through the glusterfs 3.3 source tree and can't find any code
related to the FUSE open option FOPEN_KEEP_CACHE. Does this mean that
glusterfs 3.3 doesn't support the FUSE keep-cache feature? However, I did
find keep-cache code in mainline. My questions are:
1. Does glusterfs 3.3 support page cache?
2. If not, what is the best practice to improve performance if a file is
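Later releases did expose this through the fuse client; a hedged sketch of how it is typically switched on (option spelling as in the mainline client, server and mount point are placeholders):
  # Ask the fuse bridge to keep the kernel page cache across opens
  mount -t glusterfs -o fopen-keep-cache server1:/myvol /mnt/myvol
  # Equivalent when starting the client binary directly
  glusterfs --fopen-keep-cache --volfile-server=server1 --volfile-id=myvol /mnt/myvol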
2017 Jan 03
2
shadow_copy and glusterfs not working
Hello,
we are trying to configure a CTDB cluster with GlusterFS. We are using
Samba 4.5 together with gluster 3.9. We set up an lvm2 thin-provisioned
volume to use gluster snapshots.
Then we configured the first share without using shadow_copy2, and
everything was working fine.
Then we added the shadow_copy2 parameters; when we ran "smbclient" we
got the following message:
root at
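For comparison, a shadow_copy2 share on top of gluster's user-serviceable snapshots usually looks something like the following smb.conf fragment; the share name, path and snapshot name format here are assumptions, not taken from the thread:
  [gluster-share]
      path = /gluster/share
      vfs objects = shadow_copy2
      # .snaps is the view exposed when features.uss is enabled on the volume
      shadow:snapdir = .snaps
      shadow:format = snap_%Y-%m-%d_%H-%M-%S
      shadow:sort = desc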
2019 Apr 05
2
[PATCH net v6] failover: allow name change on IFF_UP slave interfaces
On Wed, Apr 03, 2019 at 12:52:47AM -0400, Si-Wei Liu wrote:
> When a netdev appears through hot plug then gets enslaved by a failover
> master that is already up and running, the slave will be opened
> right away after getting enslaved. Today there's a race that userspace
> (udev) may fail to rename the slave if the kernel (net_failover)
> opens the slave earlier than when the
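The race is easy to picture from the userspace side; before this change, renaming an interface that is up fails outright (interface names are placeholders):
  # Renaming a running interface is rejected with EBUSY
  ip link set dev eth1 name slave0      # fails while eth1 is up
  # udev's usual sequence, which the early open defeats:
  ip link set dev eth1 down
  ip link set dev eth1 name slave0
  ip link set dev slave0 up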
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
I put the share into debug mode and then repeated the process from a ppc64
client and an x86 client. Weirdly the client logs were almost identical.
Here's the ppc64 gluster client log of attempting to create a folder...
-------------
[2017-09-20 13:34:23.344321] D
[rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (-->
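For anyone wanting to capture the same thing, client-side logging is raised per volume; a sketch, assuming a volume named myvol:
  # Raise the fuse client log level, reproduce the mkdir, then revert
  gluster volume set myvol diagnostics.client-log-level DEBUG
  gluster volume set myvol diagnostics.client-log-level INFO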
2011 Aug 24
1
Input/output error
Hi, everyone.
It's nice meeting you. My English is poor, sorry.
I am writing because I'd like to update GlusterFS to 3.2.2-1, and I want
to change from a gluster mount to an NFS mount.
I installed GlusterFS 3.2.1 one week ago, with replica 2 across two servers.
OS: CentOS 5.5 64bit
RPM: glusterfs-core-3.2.1-1
glusterfs-fuse-3.2.1-1
Command:
gluster volume create syncdata replica 2 transport tcp
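One note on the NFS part of the question: gluster's built-in NFS server speaks NFSv3 only, so the client must pin the version; a sketch with a placeholder hostname and mount point:
  # Mount the same volume through gluster's NFSv3 server
  mount -t nfs -o vers=3,mountproto=tcp server1:/syncdata /mnt/syncdata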
2017 Nov 13
2
What is the difference between FORGET and UNLINK fops
Hi,
Can I get a brief description of all the FOPS in gluster, or the location of
the source code file, so that I can try to get an understanding myself?
There are a few FOPS I'm not clear about, like FORGET, UNLINK, FLUSH and LOOKUP.
Or is there a way I can trace the FOPS that are happening in
the background for each operation? I have tried to find this in a brick
logfile in TRACE mode, but there
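A sketch of the TRACE approach mentioned above, assuming a volume named myvol (expect very large logs):
  # Log every fop reaching the bricks, then filter the brick log
  gluster volume set myvol diagnostics.brick-log-level TRACE
  grep rpcsvc /var/log/glusterfs/bricks/*.log
  # Revert when done
  gluster volume set myvol diagnostics.brick-log-level INFO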
2017 Nov 13
0
What is the difference between FORGET and UNLINK fops
Filtering the brick logs in TRACE mode on rpcsvc.c does show the FOPS.
From this, I've realized that LOOKUP is actually a DNS lookup. This
differs from the NFS lookup operation. Please correct me if I'm wrong.
Regards,
Jeevan.
On Nov 13, 2017 9:40 PM, "Jeevan Patnaik" <g1patnaik at gmail.com> wrote:
> Hi,
>
> Can I get a brief description of all the
2018 May 02
3
Healing : No space left on device
Hello list,
I have an issue on my Gluster cluster. It is composed of two data nodes
and an arbiter for all my volumes.
After having upgraded my bricks to gluster 3.12.9 (Fedora 27), this is
what I get:
  - on node 1, volumes won't start, and glusterd.log shows a lot of:
    [2018-05-02 09:46:06.267817] W
[glusterd-locks.c:843:glusterd_mgmt_v3_unlock]
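"No space left on device" on an arbiter setup is often inode exhaustion rather than a full disk, since the arbiter stores a zero-byte file per entry; a quick check, with a placeholder brick path:
  # ENOSPC on an arbiter brick frequently means inodes, not bytes
  df -h /bricks/arbiter
  df -i /bricks/arbiter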
2017 Jun 04
2
Rebalance + VM corruption - current status and request for feedback
Great news.
Is this planned to be published in the next release?
On 29 May 2017 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com>
wrote:
> Thanks for that update. Very happy to hear it ran fine without any issues.
> :)
>
> Yeah so you can ignore those 'No such file or directory' errors. They
> represent a transient state where DHT in the client process
2017 Jun 06
2
Rebalance + VM corruption - current status and request for feedback
Hi Mahdi,
Did you get a chance to verify this fix again?
If this fix works for you, is it OK if we move this bug to CLOSED state and
revert the rebalance-cli warning patch?
-Krutika
On Mon, May 29, 2017 at 6:51 PM, Mahdi Adnan <mahdi.adnan at outlook.com>
wrote:
> Hello,
>
>
> Yes, I forgot to upgrade the client as well.
>
> I did the upgrade and created a new volume,
2017 Aug 28
2
GFID attr is missing after adding large amounts of data
Hi Gluster Community,
we are seeing some problems when adding multiple terabytes of data to a 2-node replicated GlusterFS installation.
The version is 3.8.11 on CentOS 7.
The machines are connected via 10Gbit LAN and are running 24/7. The OS is virtualized on VMware.
After a restart of node-1 we see that the log files are growing to multiple gigabytes a day.
Also there seem to be problems
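A missing GFID can be confirmed directly on a brick; a sketch, with a placeholder brick path:
  # Every file on a brick should carry a trusted.gfid xattr
  getfattr -d -m . -e hex /bricks/brick1/path/to/file
  # Healthy output contains a line like trusted.gfid=0x...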
2012 Jan 13
1
Quota problems with Gluster 3.3b2
Hi everyone,
I'm playing with Gluster 3.3b2, and everything is working fine when
uploading stuff through Swift. However, when I enable quotas on Gluster,
I randomly get permission errors. Sometimes I can upload files; most
times I can't.
I'm mounting the partitions with the acl flag. I've tried wiping out
everything and starting from scratch, with the same result. As soon as I
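For reference, quota in this era of Gluster is enabled per volume and limited per directory; a sketch with placeholder names and limits:
  # Enable quota, cap a directory, then verify
  gluster volume quota myvol enable
  gluster volume quota myvol limit-usage /uploads 10GB
  gluster volume quota myvol list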
2013 Feb 18
1
Directory metadata inconsistencies and missing output ("mismatched layout" and "no dentry for inode" error)
Hi, I'm running into a rather strange and frustrating bug and wondering if
anyone on the mailing list might have some insight about what might be
causing it. I'm running a cluster of two dozen nodes, where the processing
nodes are also the gluster bricks (using the SLURM resource manager). Each
node has the gluster volume mounted natively (not NFS). All nodes are using
v3.2.7. Each job in the
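"Mismatched layout" warnings usually point at stale DHT directory layouts; the customary first step is a layout fix, sketched here with a placeholder volume name:
  # Recalculate directory layouts without migrating any data
  gluster volume rebalance myvol fix-layout start
  gluster volume rebalance myvol status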
2017 Aug 29
0
GFID attr is missing after adding large amounts of data
This is strange; a couple of questions:
1. What volume type is this? What tuning have you done? gluster v info output would be helpful here.
2. How big are your bricks?
3. Can you write me a quick reproducer so I can try this in the lab? Is it just a single multi-TB file you are untarring, or many? If you give me the steps to repro, and I hit it, we can get a bug open.
4. Other than
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
-Krutika
On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> Great news.
> Is this planned to be published in the next release?
>
> On 29 May 2017 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com>
> wrote:
>
>> Thanks for that update.
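For anyone checking whether they already run the fixed releases named above, and picking rebalance back up afterwards; a sketch with a placeholder volume name:
  # Confirm the installed release on every server and client
  glusterfs --version
  # Once on a fixed release, rebalance is safe to run again
  gluster volume rebalance myvol start
  gluster volume rebalance myvol status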
2017 Dec 06
1
Crash in glusterd!!!
Any suggestions?
On Dec 6, 2017 11:51, "ABHISHEK PALIWAL" <abhishpaliwal at gmail.com> wrote:
> Hi Team,
>
> We are getting the crash in glusterd after start of it. When I tried to
> debug in brick logs we are getting below errors:
>
> [2017-12-01 14:10:14.684122] E [MSGID: 100018]
> [glusterfsd.c:1960:glusterfs_pidfile_update] 0-glusterfsd: pidfile
>
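When glusterd dies right after start, running it in the foreground with debug logging usually shows the failing step; a hedged sketch (flag spelling as in the mainline daemon):
  # Run glusterd in the foreground with debug output
  glusterd --debug
  # Or keep it daemonless and log to a scratch file
  glusterd -N --log-level DEBUG --log-file /tmp/glusterd.log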
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks!
On 5 Jun 2017 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote:
> The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
>
> -Krutika
>
> On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Great news.
>> Is this planned to be published in next