Displaying 20 results from an estimated 10000 matches similar to: "how does gluster decide which connection to use?"
2017 Sep 04
0
heal info OK but statistics not working
Ravi/Karthick,
If one of the self-heal processes is down, will the statistics heal-count
command work?
On Mon, Sep 4, 2017 at 7:24 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> 1) one peer, out of four, got separated from the network, from the rest of
> the cluster.
> 2) that peer (while it was unavailable) got detached with the
> "gluster peer detach" command
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd.log and cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>
>
> On 13/09/17 06:21, Gaurav Yadav wrote:
>
>> Please provide the output of gluster volume info, gluster volume status
>> and gluster peer status.
>>
>> Apart from the above info, please provide glusterd logs,
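For reference, a sketch of gathering the requested output and logs, assuming the default log directory /var/log/glusterfs on each node:

$ gluster volume info
$ gluster volume status
$ gluster peer status
# glusterd log and command history, default locations
$ tar czf gluster-logs.tar.gz /var/log/glusterfs/glusterd.log /var/log/glusterfs/cmd_history.log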
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote:
> These symptoms appear to be the same as I've recorded in
> this post:
>
> http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
>
> On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee
> <atin.mukherjee83 at gmail.com
> <mailto:atin.mukherjee83 at gmail.com>> wrote:
>
> Additionally the
2017 Sep 04
2
heal info OK but statistics not working
1) one peer, out of four, got separated from the network,
from the rest of the cluster.
2) that peer (while it was unavailable) got detached with
the "gluster peer detach" command, which succeeded, so now
the cluster comprises three peers
3) The self-heal daemon (for some reason) does not start (even
after an attempt to restart glusterd) on the peer which probed
that fourth peer.
4) fourth
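For step 2), a sketch of detaching an unreachable peer (the hostname is illustrative; an offline peer usually needs the force flag):

# run on one of the remaining peers
$ gluster peer detach peer4.example.com force
$ gluster peer status   # should now list only the remaining peers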
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote:
> Please provide the output of gluster volume info, gluster
> volume status and gluster peer status.
>
> Apart from the above info, please provide glusterd logs,
> cmd_history.log.
>
> Thanks
> Gaurav
>
> On Tue, Sep 12, 2017 at 2:22 PM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of the same brick would be required. Please
check whether the brick process went down or crashed. Doing a volume start
force should resolve the issue.
On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote:
> Please send me the logs as well, i.e. glusterd.log and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
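A sketch of the suggested checks (the brick log file name is illustrative; brick logs are named after the brick path under the default log directory):

# look for crash or shutdown messages in the brick's own log
$ less /var/log/glusterfs/bricks/data-brick1-VOLNAME.log
# respawn any offline brick processes without touching the healthy ones
$ gluster volume start VOLNAME force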
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com>
wrote:
> Additionally, the brick log file of the same brick would be required.
> Please check whether the brick process went down or crashed. Doing a volume start
2019 Mar 13
1
vlan tagging for openVSwitch
hi everyone,
I'm trying to get VLANs tagged in libvirt, as the switch's end (yes,
traffic will be leaving the host and going into network switches) allows
only tagged VLANs.
But with network as such:
...
  <portgroup name='vlan-55'>
    <vlan trunk='yes'>
      <tag id='55'/>
    </vlan>
  </portgroup>
</network>
and guest as:
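A guest interface picks up the tagged portgroup by referencing it from its source element; a sketch, assuming a network named 'ovs-net' and a guest named GUEST (both names are assumptions):

$ cat > vlan55-iface.xml <<'EOF'
<interface type='network'>
  <source network='ovs-net' portgroup='vlan-55'/>
  <model type='virtio'/>
</interface>
EOF
$ virsh attach-device GUEST vlan55-iface.xml --config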
2017 Sep 07
0
peer rejected but connected
Thank you for the acknowledgement.
On Mon, Sep 4, 2017 at 6:39 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> yes, I see things got lost in transit; I said before:
>
> I did it the first time and now it is not rejected.
> Now I'm restarting the fourth (newly added) peer's glusterd
> and... it seems to work. <- HERE! (even though....
>
> and then I asked:
>
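A sketch of the restart-and-verify sequence described here (run the restart on the newly added peer):

$ systemctl restart glusterd
# from an existing node, the peer should now show "Peer in Cluster (Connected)"
# rather than "Peer Rejected"
$ gluster peer status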
2017 Sep 12
3
man pages - incomplete
@devel
hi, I wonder who takes care of man pages when it comes to rpms?
I'd like to file a bugzilla report and would like to make
sure the package maintainer(s) are responsible for
incomplete man pages.
Man pages are neglected by authors far too often, and man
is, and should always be, "the place"; we users/admins should
not have to google for info almost every time.
m.
2017 Jul 24
0
vol status detail - times out?
Yes, it could, as depending on the number of bricks there might be too many
brick ops involved. This is the reason we introduced the --timeout option in
the CLI, which can be used to set a larger timeout value. However, this fix
is available from release-3.9 onwards.
On Mon, Jul 24, 2017 at 3:54 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi fellas
>
> would you know what could be the
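A sketch of using that option (per the CLI global options available from release-3.9; the volume name and timeout value are illustrative):

# allow up to 10 minutes for the brick ops to complete
$ gluster --timeout=600 volume status VOLNAME detail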
2017 Sep 04
0
peer rejected but connected
Executing "gluster volume set all cluster.op-version <op-version>"on all
the existing nodes will solve this problem.
If issue still persists please provide me following logs (working-cluster
+ newly added peer)
1. glusterd.info file from /var/lib/glusterd from all nodes
2. glusterd.logs from all nodes
3. info file from all the nodes.
4. cmd-history from all the nodes.
Thanks
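A sketch of checking and bumping the op-version (the value 31200 is illustrative; pick the one matching your release):

# the current cluster op-version is recorded per node
$ grep operating-version /var/lib/glusterd/glusterd.info
$ gluster volume set all cluster.op-version 31200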
2017 Sep 04
0
heal info OK but statistics not working
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
On Mon, Sep 4, 2017 at 4:07 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi all
>
> this:
> $ gluster vol heal $_vol info
> outputs ok and exit code is 0
> But if I want to see statistics:
> $ gluster vol heal $_vol statistics
> Gathering crawl statistics on volume GROUP-WORK
2017 Nov 01
0
Gluster 3.12.1 OOM issue with volume which stores large amount of files
Hi,
I have been struggling with this OOM issue and so far nothing has helped.
We are running a 10TB archive volume which stores a bit more than 7M files.
The problem is that, due to the way we manage this archive, we are
forced to run daily "full scans" of the file system to discover new
uncompressed files. I know, I know, this is not an optimal solution, but it
is as it is right now. So
2012 Apr 25
1
dbench & similar - as a valid benchmark
hi everybody
would a tool such as dbench be a valid benchmark for gluster?
and, most importantly, is there any formula to estimate raw
fs to gluster performance ratio for different setups?
for instance:
having a replicated volume, two bricks, a fuse mountpoint to
the volume via a non-congested 1Gbps link
or even
a volume on a single brick with a fuse client mountpoint locally
what percentage/fraction of raw
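A sketch of a typical dbench run against a FUSE mountpoint (the path and parameters are illustrative):

# 8 simulated clients for 60 seconds against the gluster mount
$ dbench -D /mnt/glustervol -t 60 8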
2017 Sep 04
2
heal info OK but statistics not working
hi all
this:
$ gluster vol heal $_vol info
outputs ok and exit code is 0
But if I want to see statistics:
$ gluster vol heal $_vol statistics
Gathering crawl statistics on volume GROUP-WORK has been
unsuccessful on bricks that are down. Please check if all
brick processes are running.
I suspect gluster's inability to cope with a situation where
one peer (which is not even a brick for a single vol on
0
one brick one volume process dies?
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
Apart from the above info, please provide glusterd logs and cmd_history.log.
Thanks
Gaurav
On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi everyone
>
> I have a 3-peer cluster with all vols in replica mode, 9 vols.
> What I see, unfortunately, is one brick
2017 Sep 12
2
one brick one volume process dies?
hi everyone
I have a 3-peer cluster with all vols in replica mode, 9 vols.
What I see, unfortunately, is that one brick fails in one vol;
when it happens, it's always the same vol on the same brick.
Command: gluster vol status $vol - would show the brick not online.
Restarting glusterd with systemctl does not help; only a
system reboot seems to help, until it happens next time.
How to troubleshoot this
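As a starting point, a sketch of checking whether only the one brick process (glusterfsd) died, and respawning it without a reboot (VOLNAME is illustrative):

$ gluster volume status VOLNAME
# each brick runs its own glusterfsd; compare against the volume's brick paths
$ pgrep -af glusterfsd
$ gluster volume start VOLNAME force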
2017 Sep 01
2
peer rejected but connected
Logs from the newly added node helped me in the RCA of the issue.
The info file on node 10.5.6.17 contains an additional property,
"tier-enabled", which is not present in the info file on the other 3
nodes. When a gluster peer probe call is made, the cksum is compared in
order to maintain consistency across the cluster. In this case the two
files differ, leading to a different cksum, causing the state in
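A sketch of verifying the mismatch by hand (run on every node; VOLNAME is illustrative):

# per-volume metadata and its checksum live under /var/lib/glusterd
$ grep tier-enabled /var/lib/glusterd/vols/VOLNAME/info
$ cat /var/lib/glusterd/vols/VOLNAME/cksum   # compare the value across peers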
2018 Mar 08
0
gluster for home directories?
Hi Rik,
Nice clarity and detail in the description. Thanks!
inline...
On Wed, Mar 7, 2018 at 8:29 PM, Rik Theys <Rik.Theys at esat.kuleuven.be>
wrote:
> Hi,
>
> We are looking into replacing our current storage solution and are
> evaluating gluster for this purpose. Our current solution uses a SAN
> with two servers attached that serve samba and NFS 4. Clients connect to