
Displaying 20 results from an estimated 800 matches similar to: "Access from multiple hosts where users have different uid/gid"

2018 Feb 08
1
How to fix an out-of-sync node?
I have a setup with 3 nodes running GlusterFS. gluster volume create myBrick replica 3 node01:/mnt/data/myBrick node02:/mnt/data/myBrick node03:/mnt/data/myBrick Unfortunately node01 seemed to stop syncing with the other nodes, but this went undetected for weeks! When I noticed it, I did a "service glusterd restart" on node01, hoping the three nodes would sync again. But this did not
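A common recovery path here (a sketch only; the volume name comes from the post above, and exact heal behaviour depends on the GlusterFS version) is to trigger and monitor self-heal from a healthy node rather than just restarting glusterd:

    # check that all peers and brick processes are up
    gluster peer status
    gluster volume status myBrick
    # trigger a full self-heal of the volume
    gluster volume heal myBrick full
    # watch the pending-heal entry count drop to zero
    gluster volume heal myBrick info
    # entries listed here need manual split-brain resolution
    gluster volume heal myBrick info split-brain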
2017 Sep 28
1
Clients can't connect after a server reboot (need to use volume force start)
After I rebooted my GlusterFS servers I can't connect from clients any more. The volume is running, but I have to do a volume start FORCE on all server hosts to make it work again. I am running glusterfs 3.12.1 on Ubuntu 16.04. Is this a bug? Here are more details: "gluster volume status" returns: Status of volume: gv0 Gluster process TCP Port RDMA
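The workaround described reads like this (a sketch; gv0 is the volume name from the post). Force-starting respawns brick processes that failed to come up on boot, which is why the volume can report Started while clients still cannot connect:

    # volume may show Started even though brick processes are down
    gluster volume status gv0
    # respawn any brick processes that did not start on boot
    gluster volume start gv0 force
    # verify that every brick now lists a PID and TCP port
    gluster volume status gv0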
2018 May 04
2
shard corruption bug
On Fri, 4 May 2018 at 14:06, Jim Kinney <jim.kinney at gmail.com> wrote: > It stopped being an outstanding issue at 3.12.7. I think it's now fixed. So, is it not possible to extend and rebalance a working cluster with sharded data? Can someone confirm this? Maybe the ones that hit the bug in the past
2018 May 30
2
shard corruption bug
What shard corruption bug? Bugzilla URL? I'm running into some odd behavior in my lab with shards and RHEV/KVM data, trying to figure out if it's related. Thanks. On Fri, May 4, 2018 at 11:13 AM, Jim Kinney <jim.kinney at gmail.com> wrote: > I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it > to settle. No problems. I am now running replica 4
2018 Mar 07
4
Kernel NFS on GlusterFS
Hello, I'm designing a 2-node, HA NAS that must support NFS. I had planned on using GlusterFS native NFS until I saw that it is being deprecated. Then, I was going to use GlusterFS + NFS-Ganesha until I saw that the Ganesha HA support ended after 3.10 and its replacement is still a WIP. So, I landed on GlusterFS + kernel NFS + corosync & pacemaker, which seems to work quite well. Are
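One way to wire this up (a sketch only; the volume name gv0, the mount point, and the export network are hypothetical, and Gluster's built-in NFS must be disabled so it does not clash with the kernel server) is to mount the volume locally over FUSE on each NAS head and re-export it with kernel NFS:

    # mount the Gluster volume locally on each NAS head
    mount -t glusterfs localhost:/gv0 /export/gv0
    # /etc/exports entry: fsid= is required because FUSE
    # filesystems have no stable device number to export
    echo '/export/gv0 192.168.0.0/24(rw,sync,fsid=1001)' >> /etc/exports
    exportfs -ra

Corosync and pacemaker then only have to float a virtual IP and the NFS server between the two heads; Gluster itself keeps the bricks in sync.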
2018 Mar 07
0
Kernel NFS on GlusterFS
Gluster does the sync part better than corosync. It's not an active/passive failover system; it's more all-active. Gluster handles the recovery once all nodes are back online. That requires the client tool chain to understand that a write goes to all storage devices, not just the active one. 3.10 is a long-term support release. Upgrading to 3.12 or 4 is not a significant issue once a replacement
2018 May 04
0
shard corruption bug
I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it to settle. No problems. I am now running replica 4 (preparing to remove a brick and host to return to replica 3). On Fri, 2018-05-04 at 14:24 +0000, Gandalf Corvotempesta wrote: > On Fri, 4 May 2018 at 14:06, Jim Kinney <jim.kinney at gmail.com> > wrote: > > It stopped being an outstanding
2017 Oct 02
0
Clients can't connect after a server reboot (need to use volume force start)
On Thu, Sep 28, 2017 at 8:49 AM, Frizz <frizzthecat at googlemail.com> wrote: > After I rebooted my GlusterFS servers I can't connect from clients any > more. > > The volume is running, but I have to do a volume start FORCE on all server > hosts to make it work again. > > I am running glusterfs 3.12.1 on Ubuntu 16.04. > > Is this a bug? > Have you been able to
2018 May 30
0
shard corruption bug
https://docs.gluster.org/en/latest/release-notes/3.12.6/ The major issue in 3.12.6 is not present in 3.12.7. Bugzilla ID listed in link. On May 29, 2018 8:50:56 PM EDT, Dan Lavu <dan at redhat.com> wrote: >What shard corruption bug? bugzilla url? I'm running into some odd >behavior >in my lab with shards and RHEV/KVM data, trying to figure out if it's >related. >
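To check whether a given volume is exposed to this class of bug (a sketch; myvol is a hypothetical volume name), confirm the installed version and whether sharding is actually enabled:

    glusterfs --version
    # sharding must be enabled for the 3.12.6 issue to apply
    gluster volume get myvol features.shard
    gluster volume get myvol features.shard-block-size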
2018 May 30
1
peer detach fails
All, I added a third peer as an arbiter brick host to a replica 2 cluster. Then I realized I can't use it, since it has no InfiniBand like the other two hosts (InfiniBand and Ethernet for clients). So I removed the new arbiter bricks from all of the volumes. However, I can't detach the peer, as it keeps saying there are bricks it hosts. Nothing in volume status or info shows that host to be
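The sequence that normally unblocks this (a sketch; the host name and brick path are hypothetical) is to remove the arbiter bricks with an explicit lower replica count, confirm nothing still references the peer, and only then detach:

    # drop back from arbiter to plain replica 2
    gluster volume remove-brick myvol replica 2 \
        arbiterhost:/mnt/data/myvol force
    # confirm no volume still lists the peer
    gluster volume info | grep arbiterhost
    gluster peer detach arbiterhost
    # if stale state remains on the peer, detach can be forced
    gluster peer detach arbiterhost force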
2018 May 09
2
Some more questions
On Wed, 9 May 2018 at 21:22, Jim Kinney <jim.kinney at gmail.com> wrote: > You can change the replica count. Add a fourth server, add its brick to the existing volume with gluster volume add-brick vol0 replica 4 newhost:/path/to/brick This doesn't add space, but only a new replica, increasing the number of copies
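Spelled out (a sketch; vol0 and the brick path are taken from the mail, the heal step is the usual follow-up):

    # raise the replica count from 3 to 4 by adding one brick
    gluster volume add-brick vol0 replica 4 newhost:/path/to/brick
    # the new brick is populated by self-heal, not by rebalance
    gluster volume heal vol0 full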
2018 May 09
2
Some more questions
On Wed, 9 May 2018 at 21:31, Jim Kinney <jim.kinney at gmail.com> wrote: > correct. a new server will NOT add space in this manner. But the original Q was about rebalancing after adding a 4th server. If you are using distributed/replication, then yes, a new server will be adding a portion of its space to add more space to the cluster. Wait, in a distribute-replicate,
2018 May 09
2
Some more questions
OK, some more questions, as I'm still planning our SDS (but I'm prone to use LizardFS; gluster is too inflexible). Let's assume a replica 3: 1) currently, it is not possible to add a single server and rebalance like any other SDS (Ceph, Lizard, Moose, DRBD, ....), right? In replica 3, I have to add 3 new servers. 2) The same should apply when adding disks to spare slots on existing servers.
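For question 1, growing a distributed-replicated volume does indeed go in whole replica sets; a sketch (host names and paths are hypothetical):

    # add one new replica-3 set, i.e. three bricks at once
    gluster volume add-brick myvol \
        node04:/mnt/data/myvol node05:/mnt/data/myvol node06:/mnt/data/myvol
    # spread existing files across the old and new sets
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status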
2018 Jan 26
2
Replacing a third data node with an arbiter one
On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N <ravishankar at redhat.com> wrote: > > > On 01/24/2018 07:20 PM, Hoggins! wrote: > > Hello, > > The subject says it all. I have a replica 3 cluster : > > gluster> volume info thedude > > Volume Name: thedude > Type: Replicate > Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e >
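The usual conversion (a sketch; the volume name is from the thread, the brick paths and arbiter host are hypothetical) is to drop the third full replica and re-add a brick as an arbiter, which stores metadata only:

    # remove the full third replica
    gluster volume remove-brick thedude replica 2 \
        node03:/bricks/thedude force
    # re-add a third brick as a metadata-only arbiter
    gluster volume add-brick thedude replica 3 arbiter 1 \
        arbiterhost:/bricks/thedude
    gluster volume heal thedude info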
2018 Jan 12
1
Reading over than the file size on dispersed volume
Hi All, I'm using Gluster as a dispersed volume and I'm writing to ask about a very serious issue. I have 3 servers and there are 9 bricks. My volume is like below. ------------------------------------------------------ Volume Name: TEST_VOL Type: Disperse Volume ID: be52b68d-ae83-46e3-9527-0e536b867bcc Status: Started Snapshot Count: 0 Number of Bricks: 1 x (6 + 3) = 9 Transport-type: tcp Bricks:
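For reference, a 1 x (6 + 3) layout like the one above would have been created along these lines (a sketch; host names and brick paths are hypothetical):

    # 9 bricks, any 3 of which may be lost (redundancy 3)
    gluster volume create TEST_VOL disperse 9 redundancy 3 \
        server1:/bricks/b1 server1:/bricks/b2 server1:/bricks/b3 \
        server2:/bricks/b1 server2:/bricks/b2 server2:/bricks/b3 \
        server3:/bricks/b1 server3:/bricks/b2 server3:/bricks/b3 force
    # 'force' is needed because several bricks of the set share a server
    gluster volume start TEST_VOL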
2017 Dec 20
4
Syntax for creating arbiter volumes in gluster 4.0
Hi, The existing syntax in the gluster CLI for creating arbiter volumes is `gluster volume create <volname> replica 3 arbiter 1 <list of bricks>` . It means (or at least is intended to mean) that out of the 3 bricks, 1 brick is the arbiter. There has been some feedback while implementing arbiter support in glusterd2 for glusterfs-4.0 that we should change this to `replica 2 arbiter
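Concretely, the two spellings under discussion would describe the same volume (a sketch; hosts and paths are hypothetical, and the proposed form appears to count only the data copies in the replica number):

    # existing syntax: 3 bricks total, the last one is the arbiter
    gluster volume create myvol replica 3 arbiter 1 h1:/b h2:/b h3:/b
    # proposed glusterd2 syntax for the same layout
    gluster volume create myvol replica 2 arbiter 1 h1:/b h2:/b h3:/b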
2009 Apr 03
3
data.frame to array?
I have a list of data.frames > str(bins) List of 19217 $ 100026:'data.frame': 1 obs. of 6 variables: ..$ Sku : chr "100026" ..$ Bin : chr "T149C" ..$ Count: int 108 ..$ X : int 20 ..$ Y : int 149 ..$ Z : chr "3" $ 100030:'data.frame': 1 obs. of 6 variables: ....... As you can see one 'column' is
2017 Oct 05
2
Status of PBQP register allocator?
Hi all, I was wondering about whether the PBQP register allocator is likely to be maintained in the future. It's proving to be a nice way to encode some instruction encoding constraints for an out-of-tree backend we have, but there's concern about it being abandoned or bitrotting in the future. Also, if PBQP is likely to lapse out of regular maintenance in the future, is there a simple
2014 May 20
4
Disable login at boot
Hi folks, Can anyone tell me the file / edit location to disable the login/password prompt at boot? We are configuring some machines to be administered remotely and headless. Thanks, KC
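If the machines are systemd-based and the goal is an automatic console login (both are assumptions; the post names neither the distribution nor the target user), one common approach is a getty drop-in:

    # hypothetical target user 'admin'; adjust to taste
    mkdir -p /etc/systemd/system/getty@tty1.service.d
    cat > /etc/systemd/system/getty@tty1.service.d/autologin.conf <<'EOF'
    [Service]
    ExecStart=
    ExecStart=-/sbin/agetty --autologin admin --noclear %I $TERM
    EOF
    systemctl daemon-reload

On pre-systemd releases the equivalent change would go in /etc/inittab (e.g. a mingetty --autologin entry) instead.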
2000 May 22
1
Off Topic - Virus
Hello, I know this is off topic, but I've come to the conclusion that the only people who can answer my question is other Samba users... I would like to deploy on our network a centrally administered antivirus program. We're running Win98 machines doing NT type logins to Samba running on a RedHat 6.1 box. It seems that as I have researched antivirus programs which can be centrally