2017 Dec 11
2
active/active failover
Dear all,
I'm rather new to glusterfs but have some experience running larger lustre and beegfs installations. These filesystems provide active/active failover. Now, I discovered that I can also do this in glusterfs, although I didn't find detailed documentation about it. (I'm using glusterfs 3.10.8)
So my question is: can I really use glusterfs to do failover in the way described
2017 Dec 11
0
active/active failover
Hi Stefan,
I think what you propose will work, though you should test it thoroughly.
I think, more generally, "the GlusterFS way" would be to use 2-way
replication instead of a distributed volume; then you can lose one of your
servers without an outage and re-synchronize when it comes back up.
Chances are that if you weren't using the SAN volumes, you could have purchased
two servers
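For reference, a minimal sketch of the 2-way replication approach suggested above (hostnames, volume name and brick paths are placeholders, not from this thread):

# placeholders: myvol, server1/server2 and the brick paths
# create a 2-way replicated volume across two servers
gluster volume create myvol replica 2 server1:/bricks/brick1 server2:/bricks/brick1
gluster volume start myvol
# after a failed server comes back, trigger re-synchronization of pending heals
gluster volume heal myvol
# note: a plain 2-way replica is prone to split-brain; an arbiter brick is often recommended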
2017 Dec 12
1
active/active failover
Hi Alex,
Thank you for the quick reply!
Yes, I'm aware that using "plain" hardware with replication is more what GlusterFS is for. I cannot go into prices here in detail, but for me it more or less evens out. Moreover, I have SAN hardware (from the Lustre setup) that I'd rather re-use than buy new hardware. I'll test more to understand what precisely "replace-brick"
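A rough sketch of the replace-brick operation referred to above (the volume name, hostnames and brick paths are placeholders, not from this thread):

# placeholders: myvol, oldserver/newserver and the brick paths
# move a brick from a retired/failed server to a new one
gluster volume replace-brick myvol oldserver:/bricks/brick1 newserver:/bricks/brick1 commit force
# then let self-heal copy the data onto the new brick
gluster volume heal myvol full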
2018 May 22
0
@devel - Why no inotify?
how about gluster's own client(s)?
You mount the volume (locally on the server) via autofs/fstab
and watch for inotify events on that mountpoint (or a path inside it).
That is something I expected to work out of the box.
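A sketch of that approach (the volume name, mount point and the inotify-tools package are assumptions, not from this mail):

# placeholders: myvol and /mnt/myvol
# /etc/fstab entry mounting the volume locally on the server
localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0

# watch the mount point for changes (requires inotify-tools)
inotifywait -m -r -e create,modify,delete /mnt/myvol

Note that inotify on a FUSE mount generally only reports changes made through that same mount, which is why the thread is looking for a cluster-wide notification mechanism.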
On 03/05/18 17:44, Joe Julian wrote:
> There is the ability to notify the client already. If you
> developed against libgfapi you could do it (I think).
>
> On May 3, 2018 9:28:43 AM
2018 May 03
3
@devel - Why no inotify?
There is the ability to notify the client already. If you developed against libgfapi you could do it (I think).
On May 3, 2018 9:28:43 AM PDT, lemonnierk at ulrar.net wrote:
>Hey,
>
>I thought about it a while back, haven't actually done it but I assume
>using inotify on the brick should work, at least in replica volumes
>(disperse probably wouldn't, you wouldn't get
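For what it's worth, a sketch of the brick-watching idea mentioned above (the brick path is a placeholder; requires inotify-tools on the server):

# placeholder brick path; run on the server that hosts the brick
inotifywait -m -r -e create,modify,delete,moved_to /bricks/brick1
# note: gluster's internal .glusterfs/ entries also generate events,
# and on disperse volumes the bricks only hold encoded fragments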
2017 Sep 04
2
heal info OK but statistics not working
...ed from the network,
from the rest of the cluster.
2) that unavailable peer (while it was unavailable) got
detached with the "gluster peer detach" command, which succeeded,
so now the cluster comprises three peers
3) the self-heal daemon (for some reason) does not start (even with an
attempt to restart glusterd) on the peer which probed that
fourth peer.
4) the fourth, unavailable peer is still up & running but is
inaccessible to the other peers because the network is disconnected,
segmented. That peer's gluster status shows the peer is still in
the cluster.
5) So, fourth peer's gluster (nor other processes) sta...
2017 Sep 04
0
heal info OK but statistics not working
...the rest of
> the cluster.
> 2) that unavailable peer (while it was unavailable) got detached with
> the "gluster peer detach" command, which succeeded, so now the cluster comprises
> three peers
> 3) the self-heal daemon (for some reason) does not start (even with an attempt to
> restart glusterd) on the peer which probed that fourth peer.
> 4) the fourth, unavailable peer is still up & running but is inaccessible to
> the other peers because the network is disconnected, segmented. That peer's gluster
> status shows the peer is still in the cluster.
> 5) So, fourth peer's gluster (nor othe...
2011 Feb 24
0
No subject
which is a stripe of the gluster storage servers, this is the
performance I get (note: use a file size > the amount of RAM on the client and
server systems, 13 GB in this case):
4k block size :
111 pir4:/pirstripe% /sb/admin/scripts/nfsSpeedTest -s 13g -y
pir4: Write test (dd): 142.281 MB/s 1138.247 mbps 93.561 seconds
pir4: Read test (dd): 274.321 MB/s 2194.570 mbps 48.527 seconds
testing from 8k -
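nfsSpeedTest appears to be a site-local script; a rough manual equivalent of the 4k test above might look like this (the mount path is a placeholder, and the drop_caches step needs root on the client):

# placeholder mount path; 3407872 x 4k blocks = ~13 GB
dd if=/dev/zero of=/mnt/stripe/testfile bs=4k count=3407872 conv=fdatasync
# drop the page cache so the read test is not served from RAM
echo 3 > /proc/sys/vm/drop_caches
# read the file back
dd if=/mnt/stripe/testfile of=/dev/null bs=4k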
2018 Apr 09
2
Gluster cluster on two networks
Hi all!
I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer
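For context, a "2 x (2 + 1)" layout is normally created as a distributed-replicated volume with arbiter bricks; a sketch with placeholder hostnames, volume name and brick paths:

# placeholders: myvol, s1..s6 and the brick paths
# two replica sets, each with 2 data bricks + 1 arbiter brick
gluster volume create myvol replica 3 arbiter 1 \
    s1:/bricks/b1 s2:/bricks/b1 s3:/bricks/arb1 \
    s4:/bricks/b2 s5:/bricks/b2 s6:/bricks/arb2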
2018 Apr 10
0
Gluster cluster on two networks
Marcus,
Can you share server-side gluster peer probe and client-side mount
command-lines.
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all!
>
> I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
>
> Centos 7 and gluster version 3.12.6 on server.
>
> All machines have two network interfaces and are connected to
2017 Sep 04
0
heal info OK but statistics not working
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
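That is, the output of these three commands, run on one of the peers:

gluster volume info
gluster volume status
gluster peer status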
On Mon, Sep 4, 2017 at 4:07 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi all
>
> this:
> $ gluster vol heal $_vol info
> outputs ok and exit code is 0
> But if I want to see statistics:
> $ gluster vol heal $_vol statistics
> Gathering crawl statistics on volume GROUP-WORK
2018 Apr 10
1
Gluster cluster on two networks
Yes,
In first server (urd-gds-001):
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004
gluster pool list (from urd-gds-001):
UUID Hostname State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0 urd-gds-002 Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f urd-gds-003 Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b urd-gds-004
2018 Apr 10
0
Gluster cluster on two networks
Hi all!
I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer probe
2017 Sep 04
2
heal info OK but statistics not working
hi all
this:
$ gluster vol heal $_vol info
outputs ok and exit code is 0
But if I want to see statistics:
$ gluster vol heal $_vol statistics
Gathering crawl statistics on volume GROUP-WORK has been
unsuccessful on bricks that are down. Please check if all
brick processes are running.
I suspect gluster's inability to cope with a situation where
one peer (which is not even a brick for a single vol on
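Given the error above, a first check would be along these lines (GROUP-WORK is the volume named in the error; the "start ... force" step is a general way to restart down bricks and the self-heal daemon, not something suggested in this mail):

# see which brick processes and self-heal daemons are online
gluster volume status GROUP-WORK
# restart any that are down without disturbing running bricks (general approach, not from this thread)
gluster volume start GROUP-WORK force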
2010 Apr 22
1
Transport endpoint not connected
Hey guys,
I've recently implemented gluster to share web content read-write between
two servers.
Version : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
Fuse : 2.7.2-1ubuntu2.1
Platform : ubuntu 8.04LTS
I used the following command to generate my configs:
/usr/local/bin/glusterfs-volgen --name repstore1 --raid 1
10.10.130.11:/data/export
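glusterfs-volgen was superseded by the gluster CLI in later releases; a roughly equivalent 2-way replicated setup today would be created like this (the second server's address is hypothetical, since the command above is cut off):

# 10.10.130.12 is a hypothetical second server
gluster peer probe 10.10.130.12
gluster volume create repstore1 replica 2 \
    10.10.130.11:/data/export 10.10.130.12:/data/export
gluster volume start repstore1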
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
These errors are because glusternw is not assigned to the correct
interface. Once you attach that, these errors should go away. This has
nothing to do with the problem you are seeing.
Sahina, any idea about the engine not showing the correct volume info?
On Mon, Jul 24, 2017 at 7:30 PM, yayo (j) <jaganz at gmail.com> wrote:
> Hi,
>
> UI refreshed but the problem still remains ...
>
2017 May 29
1
Failure while upgrading gluster to 3.10.1
Sorry for the big attachment in the previous mail... the last 1000 lines of those logs are
attached now.
On Mon, May 29, 2017 at 4:44 PM, Pawan Alwandi <pawan at platform.sh> wrote:
>
>
> On Thu, May 25, 2017 at 9:54 PM, Atin Mukherjee <amukherj at redhat.com>
> wrote:
>
>>
>> On Thu, 25 May 2017 at 19:11, Pawan Alwandi <pawan at platform.sh> wrote:
>>
2017 Jul 24
3
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Hi,
UI refreshed but the problem still remains ...
No specific error; I only have these errors, but I've read that there is no
problem if I have this kind of error:
2017-07-24 15:53:59,823+02 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler2) [b7590c4] START,
GlusterServersListVDSCommand(HostName
= node01.localdomain.local,
2017 Jun 20
0
gluster peer probe failing
Hi,
I am able to recreate the issue and here is my RCA.
The maximum value, i.e. 32767, is being overflowed while doing manipulation on
it, and this was previously not handled properly.
Hence glusterd was crashing with SIGSEGV.
The issue is being fixed with
https://bugzilla.redhat.com/show_bug.cgi?id=1454418 and is being backported
as well.
Thanks
Gaurav
On Tue, Jun 20, 2017 at 6:43 AM, Gaurav
2017 Jul 03
2
Failure while upgrading gluster to 3.10.1
Hello Atin,
I've gotten around to this and was able to get the upgrade done using 3.7.0
before moving to 3.11. For some reason 3.7.9 wasn't working well.
On 3.11, though, I notice that gluster/nfs has really been made optional and
nfs-ganesha is being recommended. We have plans to switch to nfs-ganesha
on new clusters but would like to have glusterfs-gnfs on existing clusters
so a seamless upgrade
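For what it's worth, a sketch of keeping gnfs enabled on an existing volume (the volume name is a placeholder; assumes the glusterfs-gnfs package is installed on 3.11):

# placeholder volume name; gluster/nfs is off by default on newer releases
gluster volume set myvol nfs.disable off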