similar to: NFS versus Fuse file locking problem (NFS works, fuse doesn't...)

Displaying 20 results from an estimated 3000 matches similar to: "NFS versus Fuse file locking problem (NFS works, fuse doesn't...)"

2017 Aug 24
3
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
Hi, This is Gluster 3.8.4. Volume options are out of the box, and sharding is off (I don't think enabling it would matter). I haven't done much performance tuning. For one thing, using a simple script that just creates files I can easily flood the network, so I don't expect a performance issue. The problem we see is that after a certain time the fuse clients completely stop accepting
2017 Aug 24
0
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
Hi Krist, What are your volume options on that setup? Have you tried tuning it for the kind of workload and file sizes you have? I would definitely do some tests with features.shard=on/off first. If shard is on, try playing with features.shard-block-size. Do you have jumbo frames (MTU=9000) enabled across the switch and nodes? If you have concurrent clients writing/reading, it could be beneficial
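A rough sketch of those suggestions as commands, assuming a hypothetical volume "myvol" and NIC "eth0" (verify option names against your gluster version):

# check and toggle sharding
gluster volume get myvol features.shard
gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 64MB
# confirm jumbo frames on the storage interface
ip link show eth0 | grep mtu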
2017 Aug 25
0
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
On Thu, Aug 24, 2017 at 9:01 AM, Krist van Besien <krist at redhat.com> wrote: > Hi > This is gluster 3.8.4. Volume options are out of the box. Sharding is off > (and I don't think enabling it would matter) > > I haven't done much performance tuning. For one thing, using a simple > script that just creates files I can easily flood the network, so I don't >
2017 Jun 02
1
File locking...
Hi all, A few questions. - Is POSIX locking enabled when using the native client? I would assume yes. - What other settings/tuneables exist when it comes to file locking? Krist -- Vriendelijke Groet | Best Regards | Freundliche Grüße | Cordialement ------------------------------ Krist van Besien | Senior Architect | Red Hat EMEA Cloud Practice | RHCE | RHCSA Open Stack @: krist at
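A quick way to sanity-check locking on a FUSE mount is flock(1); note that it exercises flock(2)-style locks rather than fcntl POSIX locks, so treat this only as a rough check (the mount path is a placeholder):

# hold an exclusive lock for 30 seconds from one client
flock -x /mnt/glustervol/locktest -c 'sleep 30'
# from a second client, a non-blocking attempt should fail while the lock is held
flock -xn /mnt/glustervol/locktest -c 'echo got lock'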
2017 May 31
2
"Another Transaction is in progres..."
Hi all, I am trying to do trivial things, like setting quotas or just querying status, and I keep getting "Another transaction is in progress for <some volume>". These messages pop up, then disappear for a while, then pop up again... What do these messages mean? How do I figure out which "transaction" is meant here, and what do I do about it? Krist -- Vriendelijke
2017 Aug 30
4
GlusterFS as virtual machine storage
Ciao Gionatan, I run Gluster 3.10.x (Replica 3 arbiter or 2 + 1 arbiter) to provide storage for oVirt 4.x and I have had no major issues so far. I have done online upgrades a couple of times, power losses, maintenance, etc., with no issues. Overall, it is very resilient. An important thing to keep in mind is your network: I run the Gluster nodes on a redundant network using bonding mode 1 and I have
2017 Jul 04
2
I need a sanity check.
2017 Aug 30
0
GlusterFS as virtual machine storage
There has been a bug associated with sharding that led to VM corruption and that has been around for a long time (difficult to reproduce, I understood). I have not seen reports on that for some time after the last fix, so hopefully VM hosting is now stable. 2017-08-30 3:57 GMT+02:00 Everton Brogliatto <brogliatto at gmail.com>: > Ciao Gionatan, > > I run Gluster 3.10.x (Replica 3 arbiter
2017 Aug 30
3
GlusterFS as virtual machine storage
Solved as of 3.7.12. The only bug left is when adding new bricks to create a new replica set; not sure where we are now on that bug, but that's not a common operation (well, at least for me). On Wed, Aug 30, 2017 at 05:07:44PM +0200, Ivan Rossi wrote: > There has been a bug associated with sharding that led to VM corruption and > that has been around for a long time (difficult to reproduce, I
2017 Jun 01
0
"Another Transaction is in progres..."
Thanks for the suggestion, this solved it for us, and we probably found the cause as well. We had Performance Co-Pilot running and it was continuously enabling profiling on volumes... We found the reference to the node that had the lock, restarted glusterd on that node, and all went well from there on. Krist On 31 May 2017 at 15:56, Vijay Bellur <vbellur at redhat.com> wrote: >
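A minimal sketch of that procedure, assuming systemd-managed glusterd (the log file name varies by release):

# find the node holding the cluster-wide lock
grep -i lock /var/log/glusterfs/glusterd.log | tail    # older releases: etc-glusterfs-glusterd.vol.log
# on that node, restart the management daemon to release the stale lock
systemctl restart glusterd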
2006 Oct 20
4
tcpsnoop problem
Hello, I have the following problem: on a Solaris 10 machine with 5 "zones" there is a process that is talking to the wrong DB server. I need to find out which process this is, so I can analyze this further. I have tried doing this using tcpsnoop from the DTrace toolkit, but without success. This is what I've done. First I started tcpsnoop, dumping its output
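One way this is commonly attempted, hedged since column layout differs between DTrace toolkit versions (the DB server address and PID are placeholders):

# run tcpsnoop globally and filter on the remote DB server address
./tcpsnoop | grep 192.168.10.50
# map the reported PID to a zone (Solaris 10 ps supports the "zone" output field)
ps -eo zone,pid,args | grep <PID>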
2017 Aug 22
0
Performance testing with sysbench...
Hi all, I'm doing some performance tests... If I test a simple sequential write using dd I get a throughput of about 550 Mb/s. When I do a sequential write test using sysbench this drops to about 200. Is this due to the way sysbench tests? Or has in this case the performance of sysbench itself become the bottleneck? Krist -- Vriendelijke Groet | Best Regards | Freundliche Grüße |
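For comparison, a sketch of roughly equivalent tests; paths are placeholders and the sysbench lines use 1.0.x syntax (older versions need --test=fileio). Differing block sizes and caching are common reasons the two tools disagree.

# sequential write with dd; drop oflag=direct if the mount rejects O_DIRECT
dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1M count=4096 oflag=direct
# sequential write with sysbench using a comparable 1M block size
cd /mnt/glustervol
sysbench fileio --file-total-size=4G prepare
sysbench fileio --file-total-size=4G --file-test-mode=seqwr --file-block-size=1M run
sysbench fileio --file-total-size=4G cleanup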
2017 Jul 26
0
Heketi and Geo Replication.
Hello, Is it possible to set up a Heketi-managed Gluster cluster in one datacenter, and then have geo-replication for all volumes to a second cluster in another datacenter? I've been looking at that, but haven't really found a recipe/solution for this. Ideally what I want is that when a volume is created in cluster1, a slave volume is automatically created in cluster2, and
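Whether heketi can automate the slave side is exactly the open question here; for two volumes that already exist, a session is normally created along these lines (volume and host names are placeholders):

# on the master cluster, with passwordless ssh to the slave node already set up
gluster volume geo-replication mastervol slavenode::slavevol create push-pem
gluster volume geo-replication mastervol slavenode::slavevol start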
2017 Jun 01
0
Disconnected gluster node thinks it is still connected...
Hi all, Trying to do some availability testing. We have three nodes: node1, node2, node3. Volumes are all replica 2, across all three nodes. As a test we disconnected node1 by removing the VLAN tag for that host on the switch it is connected to. As a result node2 and node3 now show node1 in disconnected status, and show the volumes as degraded. This is expected. However, logging in to node1
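A hedged sketch of the checks usually run from each side in this kind of test (volume name "myvol" is a placeholder):

# compare cluster membership as seen from node1 versus node2/node3
gluster peer status
gluster volume status
# check which bricks still need healing once node1 rejoins
gluster volume heal myvol info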
2017 Oct 19
0
Trying to remove a brick (with heketi) fails...
Hello, I have a gluster cluster with 4 nodes that is managed using heketi. I want to test the removal of one node. We have several volumes on it, some with rep=2, others with rep=3. I get the following error: [root at CTYI1458 .ssh]# heketi-cli --user admin --secret "******" node remove 749850f8e5fd23cf6a224b7490499659 Error: Failed to remove device, error: Cannot replace brick
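For reference, a sketch of the usual disable-before-remove sequence in heketi-cli; it may or may not avoid the replace-brick error above (device IDs are placeholders, the node ID is the one from the error output):

heketi-cli node disable 749850f8e5fd23cf6a224b7490499659
heketi-cli node info 749850f8e5fd23cf6a224b7490499659    # lists the node's device IDs
heketi-cli device disable <device-id>
heketi-cli device remove <device-id>
heketi-cli node remove 749850f8e5fd23cf6a224b7490499659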
2017 Aug 26
0
GlusterFS as virtual machine storage
On 26-08-2017 07:38, Gionatan Danti wrote: > I'll surely give a look at the documentation. I have the "bad" habit > of not putting into production anything I know how to repair/cope > with. > > Thanks. Mmmm, this should read as: "I have the "bad" habit of not putting into production anything I do NOT know how to repair/cope with" Really :D
2017 Aug 31
2
Manually delete .glusterfs/changelogs directory ?
Hi Mabi, If you will not use that geo-replication volume session again, I believe it is safe to manually delete the files in the brick directory using rm -rf. However, the gluster documentation specifies that if the session is to be permanently deleted, this is the command to use: gluster volume geo-replication gv1 snode1::gv2 delete reset-sync-time
2017 Aug 26
2
GlusterFS as virtual machine storage
On 26-08-2017 01:13, WK wrote: > Big +1 on what Kevin just said. Just avoiding the problem is the > best strategy. Ok, never run Gluster with anything less than replica 2 + arbiter ;) > However, for the record, and if you really, really want to get deep > into the weeds on the subject, then the Gluster people have docs on > Split-Brain recovery. > >
2017 Aug 30
0
Manually delete .glusterfs/changelogs directory ?
Hi, does anyone have any advice about my question below? Thanks! > -------- Original Message -------- > Subject: Manually delete .glusterfs/changelogs directory ? > Local Time: August 16, 2017 5:59 PM > UTC Time: August 16, 2017 3:59 PM > From: mabi at protonmail.ch > To: Gluster Users <gluster-users at gluster.org> > > Hello, > > I just deleted (permanently)
2017 Aug 16
2
Manually delete .glusterfs/changelogs directory ?
Hello, I just deleted (permanently) my geo-replication session using the following command: gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo delete and noticed that the .glusterfs/changelogs directory on my volume still exists. Is it safe to delete the whole directory myself with "rm -rf .glusterfs/changelogs"? As far as I understand the CHANGELOG.* files are only needed
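Pulling the answer above together, a minimal sketch of the cleanup, only if the session will never be reused (the brick path is a placeholder):

gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo delete reset-sync-time
rm -rf /path/to/brick/.glusterfs/changelogs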