Displaying 20 results from an estimated 24 matches for "glust".
2017 Dec 11
2
active/active failover
Dear all,
I'm rather new to glusterfs but have some experience running larger lustre and beegfs installations. These filesystems provide active/active failover. Now, I discovered that I can also do this in glusterfs, although I didn't find detailed documentation about it. (I'm using glusterfs 3.10.8)
So my question is: can...
2017 Dec 11
0
active/active failover
Hi Stefan,
I think what you propose will work, though you should test it thoroughly.
I think more generally, "the GlusterFS way" would be to use 2-way
replication instead of a distributed volume; then you can lose one of your
servers without an outage and re-synchronize when it comes back up.
Chances are, if you weren't using the SAN volumes, you could have purchased
two servers each with enough disk to make...
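A minimal sketch of the 2-way replication being suggested, assuming two hypothetical servers (server1, server2) each with a local brick at /data/brick1 and a made-up volume name "myvol"; none of these names come from the thread:

# create a 2-way replicated volume spanning both servers
gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1
gluster volume start myvol
# if one server fails, clients keep using the surviving replica;
# when it returns, self-heal re-synchronizes it, and progress can be watched with:
gluster volume heal myvol info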
2017 Dec 12
1
active/active failover
Hi Alex,
Thank you for the quick reply!
Yes, I'm aware that using "plain" hardware with replication is more what GlusterFS is for. I cannot talk about prices here in detail, but for me it more or less evens out. Moreover, I have spare SAN storage that I'd rather re-use (because of Lustre) than buy new hardware. I'll test more to understand what precisely "replace-brick" changes. I understand the mod...
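For reference, a hedged sketch of what a replace-brick operation looks like on a recent GlusterFS release; the volume name, hostnames and brick paths below are placeholders, not Stefan's actual setup:

# swap a brick on a failed/retired server for one on a new server
gluster volume replace-brick myvol oldserver:/data/brick1 newserver:/data/brick1 commit force
# self-heal then copies the data onto the new brick; progress can be checked with:
gluster volume heal myvol info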
2018 May 22
0
@devel - Why no inotify?
how about gluster's own client(s)?
You mount the volume (locally on the server) via autofs/fstab
and watch for inotify events on that mountpoint (or a path inside it).
That is something I expected to work out of the box.
On 03/05/18 17:44, Joe Julian wrote:
> There is the ability to notify the client already. If you
> devel...
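A rough sketch of the suggestion above (mount the volume locally on the server and watch it with inotify); the volume name, mount point and inotifywait invocation are illustrative assumptions:

# /etc/fstab entry mounting the volume on the server itself via the FUSE client
localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0
# then watch the mounted path (or a subdirectory) with inotify-tools
inotifywait -m -r -e create,modify,delete /mnt/myvol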
2018 May 03
3
@devel - Why no inotify?
...plica volumes
>(disperse probably wouldn't, you wouldn't get all events or you'd need
>to make sure your inotify runs on every brick). Then from there you
>could notify your clients, not ideal, but that should work.
>
>I agree that adding support for inotify directly into gluster would be
>great, but I'm not sure gluster has any mechanics for notifying clients
>of changes since most of the logic is in the client, as I understand
>it.
>
>On Thu, May 03, 2018 at 04:33:30PM +0100, lejeczek wrote:
>> hi guys
>>
>> will we have gluster wit...
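As a hedged sketch of the brick-side approach described above (on replica volumes every brick holds the full file set, so each brick sees the events); the brick path and event list are assumptions:

# run on each replica server, watching the brick directory directly
inotifywait -m -r -e close_write,create,delete /data/brick1
# whatever consumes these events still has to fan the notifications out to the clients itself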
2017 Sep 04
2
heal info OK but statistics not working
1) one peer, out of four, got separated from the network,
from the rest of the cluster.
2) that peer, while it was unavailable, got detached with
the "gluster peer detach" command, which succeeded,
so now the cluster comprises three peers
3) the Self-heal daemon (for some reason) does not start (even
with an attempt to restart glusterd) on the peer which probed that
fourth peer.
4) the fourth, unavailable peer is still up & running but is
inaccessible to oth...
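For context, the commands involved in steps 2) and 3) would look roughly like this; the peer name and the use of "force" are assumptions, not taken from the report:

# step 2: detach the unreachable peer ("force" is needed while it is still unreachable)
gluster peer detach peer4 force
# step 3: check whether the Self-heal Daemon shows as online on the remaining peers
gluster volume status
# and, on the affected peer, try restarting the management daemon
systemctl restart glusterd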
2017 Sep 04
0
heal info OK but statistics not working
...statistics heal-count
command work?
On Mon, Sep 4, 2017 at 7:24 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> 1) one peer, out of four, got separated from the network, from the rest of
> the cluster.
> 2) that peer, while it was unavailable, got detached with the
> "gluster peer detach" command, which succeeded, so now the cluster comprises
> three peers
> 3) the Self-heal daemon (for some reason) does not start (even with an attempt to
> restart glusterd) on the peer which probed that fourth peer.
> 4) the fourth, unavailable peer is still up & running but is ina...
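The command being asked about is presumably the heal-count variant, e.g. (volume name is a placeholder):

gluster volume heal myvol statistics heal-count
# unlike "heal ... info", the statistics sub-commands query the self-heal daemon
# on each brick host, so they can fail when bricks are down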
2011 Feb 24
0
No subject
which is a stripe of the gluster storage servers, this is the
performance I get (note: use a file size > the amount of RAM on the client and
server systems, 13GB in this case):
4k block size :
111 pir4:/pirstripe% /sb/admin/scripts/nfsSpeedTest -s 13g -y
pir4: Write test (dd): 142.281 MB/s 1138.247 mbps 93.561 seconds
pir4: Read te...
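A minimal version of the same kind of streaming test using plain dd instead of the site-local nfsSpeedTest script; the mount path and sizes are assumptions (the file must be larger than RAM on both client and server to defeat caching):

# write test: 13 GiB of zeros in 4k blocks, flushed to disk before dd reports a rate
dd if=/dev/zero of=/mnt/gluster/testfile bs=4k count=3407872 conv=fdatasync
# read test: drop the page cache first (as root), then stream the file back
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/gluster/testfile of=/dev/null bs=4k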
2018 Apr 09
2
Gluster cluster on two networks
Hi all!
I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
CentOS 7 and gluster version 3.12.6 on the servers.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was cr...
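For a layout like this, the cluster-internal network is picked by the names used when probing peers, while clients pick a network through the name used in the mount command; a hedged illustration with made-up names:

# peers probed with names that resolve on the 10.10.0.0/16 storage network (via /etc/hosts)
gluster peer probe gds-node2
# clients on the 192.168.67.0/24 network mount using a name that resolves there
mount -t glusterfs gds-node1.example.com:/myvol /mnt/myvol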
2018 Apr 10
0
Gluster cluster on two networks
Marcus,
Can you share the server-side gluster peer probe and client-side mount
command lines?
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all!
>
> I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
>
> CentOS 7 and gluster version 3.12.6 on the servers.
>
>...
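The command lines being asked for would typically look something like the following; the volume name and mount options are placeholders, not Marcus's actual configuration:

# server side: run on an existing cluster member, once per new server
gluster peer probe urd-gds-002
# client side: FUSE mount with fallback volfile servers for the other peers
mount -t glusterfs -o backup-volfile-servers=urd-gds-002:urd-gds-003 urd-gds-001:/myvol /mnt/myvol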
2017 Sep 04
0
heal info OK but statistics not working
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
On Mon, Sep 4, 2017 at 4:07 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi all
>
> this:
> $ vol heal $_vol info
> outputs ok and exit code is 0
> But if I want to see statistics:
> $ gluster vol heal $_vo...
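Concretely, the requested diagnostics would be gathered with (the volume name GROUP-WORK appears elsewhere in this thread):

gluster volume info
gluster volume status GROUP-WORK
gluster peer status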
2018 Apr 10
1
Gluster cluster on two networks
Yes,
On the first server (urd-gds-001):
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004
gluster pool list (from urd-gds-001):
UUID Hostname State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0 urd-gds-002 Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f urd-gds-003 Connec...
2018 Apr 10
0
Gluster cluster on two networks
Hi all!
I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
CentOS 7 and gluster version 3.12.6 on the servers.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was create...
2017 Sep 04
2
heal info OK but statistics not working
hi all
this:
$ vol heal $_vol info
outputs ok and exit code is 0
But if I want to see statistics:
$ gluster vol heal $_vol statistics
Gathering crawl statistics on volume GROUP-WORK has been
unsuccessful on bricks that are down. Please check if all
brick processes are running.
I suspect gluster's inability to cope with a situation where
one peer (which is not even a brick for a single vol on the
cl...
2010 Apr 22
1
Transport endpoint not connected
Hey guys,
I've recently implemented gluster to share web content read-write between
two servers.
Version : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
Fuse : 2.7.2-1ubuntu2.1
Platform : ubuntu 8.04LTS
I used the following command to generate my configs:
/usr/local/bin/glusterfs-volgen --name repstore1 --raid...
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
These errors are because glusternw is not assigned to the correct
interface. Once you attach that, these errors should go away. This has
nothing to do with the problem you are seeing.
Sahina, any idea about the engine not showing the correct volume info?
On Mon, Jul 24, 2017 at 7:30 PM, yayo (j) <jaganz at gmail.com> wrote:
> H...
2017 May 29
1
Failure while upgrading gluster to 3.10.1
...t platform.sh> wrote:
>
>
> On Thu, May 25, 2017 at 9:54 PM, Atin Mukherjee <amukherj at redhat.com>
> wrote:
>
>>
>> On Thu, 25 May 2017 at 19:11, Pawan Alwandi <pawan at platform.sh> wrote:
>>
>>> Hello Atin,
>>>
>>> Yes, glusterd on other instances are up and running. Below is the
>>> requested output on all the three hosts.
>>>
>>> Host 1
>>>
>>> # gluster peer status
>>> Number of Peers: 2
>>>
>>> Hostname: 192.168.0.7
>>> Uuid: 5ec54b4f-...
2017 Jul 24
3
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Hi,
The UI refreshed but the problem still remains ...
No specific error; I only have these errors, but I've read that there is no
problem if I have this kind of error:
2017-07-24 15:53:59,823+02 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler2) [b7590c4] START,
GlusterServersListVDSCommand(HostName
= node01.localdomain.local, VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 29a62417
2017-07-24 15:54:01,066+02 I...
2017 Jun 20
0
gluster peer probe failing
Hi,
I am able to recreate the issue and here is my RCA.
The maximum value, i.e. 32767, was being overflowed while doing manipulation on it,
and this was previously not handled properly.
Hence glusterd was crashing with SIGSEGV.
The issue is being fixed with
https://bugzilla.redhat.com/show_bug.cgi?id=1454418 and is being backported
as well.
Thanks
Gaurav
On Tue, Jun 20, 2017 at 6:43 AM, Gaurav Yadav <gyadav at redhat.com> wrote:
> Hi,
>
> I have tried on my host by...
2017 Jul 03
2
Failure while upgrading gluster to 3.10.1
Hello Atin,
I've gotten around to this and was able to get the upgrade done using 3.7.0
before moving to 3.11. For some reason 3.7.9 wasn't working well.
On 3.11, though, I notice that gluster/nfs is really made optional and
nfs-ganesha is being recommended. We have plans to switch to nfs-ganesha
on new clusters but would like to have glusterfs-gnfs on existing clusters
so a seamless upgrade without downtime is possible.
[2017-07-03 06:43:25.511893] I [MSGID: 106600]
[glusterd-nfs-sv...
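For reference, on releases where the gnfs server is still available (shipped as the separate glusterfs-gnfs package mentioned above), it is typically re-enabled per volume with something like the following; the volume name is a placeholder:

# turn the built-in gluster/nfs server back on for a volume
gluster volume set myvol nfs.disable off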