Displaying 20 results from an estimated 200000 matches similar to: "does your kdump work?"
2019 Mar 28
0
How to specify kernel version when restart kdump
On Thu, Mar 28, 2019 at 9:24 AM Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:
>
>
> 1) In CentOS 6 we have the classical SysV service
> file: /etc/rc.d/init.d/kdump
>
> Supposing you have just installed 2.6.32-642.13.1.el6.x86_64 kernel
>
> [snip]
>
> and at the end it runs this command if it doesn't find one:
> $MKDUMPRD $kdump_initrd
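A hedged sketch of one way to pin the version on such a system: set KDUMP_KERNELVER in /etc/sysconfig/kdump so the init script builds its initrd for that kernel rather than the running kernel reported by uname -r, remove any stale image, and restart the service. The variable and the /boot/initrd-<kver>kdump.img naming are assumed from the stock CentOS 6 files, so verify them locally:
$ grep KDUMP_KERNELVER /etc/sysconfig/kdump      # confirm the knob exists on this release
$ vi /etc/sysconfig/kdump                        # set KDUMP_KERNELVER="2.6.32-642.13.1.el6.x86_64"
$ rm -f /boot/initrd-2.6.32-642.13.1.el6.x86_64kdump.img   # drop the stale image so it gets rebuilt
$ service kdump restart                          # the init script now runs mkdumprd for that kernel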
2017 Sep 07
0
peer rejected but connected
Thank you for the acknowledgement.
On Mon, Sep 4, 2017 at 6:39 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> yes, I see things got lost in transit, I said before:
>
> I did it the first time and now it is not rejected.
> now I'm restarting the fourth (newly added) peer's glusterd
> and.. it seems to work. <- HERE! (even though....
>
> and then I asked:
>
2017 Sep 01
2
peer rejected but connected
Logs from the newly added node helped me in the RCA of the issue.
The info file on node 10.5.6.17 contains an additional property,
"tier-enabled", which is not present in the info file on the other 3 nodes.
When a gluster peer probe call is made, the cksum is compared in order to
maintain consistency across the cluster. In this
case the two files differ, leading to different cksums and causing the state
in
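A hedged sketch of how to confirm that mismatch by hand, assuming the standard /var/lib/glusterd layout (VOLNAME is a placeholder for the affected volume):
$ grep tier-enabled /var/lib/glusterd/vols/VOLNAME/info   # present only on the odd node out
$ cat /var/lib/glusterd/vols/VOLNAME/cksum                # value should match on every node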
2017 Sep 04
0
peer rejected but connected
Executing "gluster volume set all cluster.op-version <op-version>"on all
the existing nodes will solve this problem.
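A hedged usage sketch; the target value is left as a placeholder because the correct op-version depends on the gluster release in use, so read the value recorded in glusterd.info first:
$ grep operating-version /var/lib/glusterd/glusterd.info   # current op-version on this node
$ gluster volume set all cluster.op-version <op-version>   # raise it to the highest value all peers support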
If the issue still persists, please provide me the following logs (working cluster
+ newly added peer):
1. glusterd.info file from /var/lib/glusterd from all nodes
2. glusterd.logs from all nodes
3. info file from all the nodes.
4. cmd-history from all the nodes.
Thanks
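A collection sketch for the four items above, assuming default paths; the glusterd log file name differs between releases (older ones use etc-glusterfs-glusterd.vol.log), so adjust as needed and run it on every node:
$ tar czf gluster-debug-$(hostname).tar.gz \
      /var/lib/glusterd/glusterd.info \
      /var/lib/glusterd/vols/*/info \
      /var/log/glusterfs/glusterd.log \
      /var/log/glusterfs/cmd_history.log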
2017 Sep 04
0
heal info OK but statistics not working
Ravi/Karthick,
If one of the self-heal processes is down, will the statistics heal-count
command work?
On Mon, Sep 4, 2017 at 7:24 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> 1) one peer, out of four, got separated from the network, from the rest of
> the cluster.
> 2) that unavailable (while it was unavailable) peer got detached with
> "gluster peer detach" command
2017 Sep 04
2
heal info OK but statistics not working
1) one peer, out of four, got separated from the network,
from the rest of the cluster.
2) that unavailable (while it was unavailable) peer got
detached with the "gluster peer detach" command, which succeeded,
so now the cluster comprises three peers
3) the Self-heal daemon (for some reason) does not start (even with an
attempt to restart glusterd) on the peer which probed that
fourth peer.
4) fourth
2016 Nov 04
2
mailing list mail from @yahoo addresses
[extracted from "Re: [CentOS] dnf and failing epel" message chain.]
> From: lejeczek peljasz at yahoo.co.uk
> Date: Fri Nov 4 13:39:40 UTC 2016
>> Date: Friday, November 04, 2016 08:51:07 -0400
>> From: Jonathan Billings <billings at negate.org>
>>
>>> On Fri, Nov 04, 2016 at 12:30:02PM +0000, lejeczek wrote:
>>>
>>> ps. I
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd.logs and cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>
>
> On 13/09/17 06:21, Gaurav Yadav wrote:
>
>> Please provide the output of gluster volume info, gluster volume status
>> and gluster peer status.
>>
>> Apart from above info, please provide glusterd logs,
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote:
> Please provide the output of gluster volume info, gluster
> volume status and gluster peer status.
>
> Apart from above info, please provide glusterd logs,
> cmd_history.log.
>
> Thanks
> Gaurav
>
> On Tue, Sep 12, 2017 at 2:22 PM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
2019 Mar 28
2
How to specify kernel version when restart kdump
On Thu, Mar 28, 2019 at 6:55 AM wuzhouhui <wuzhouhui14 at mails.ucas.ac.cn>
wrote:
> > -----Original Messages-----
> > From: "Benjamin Hauger" <hauger at noao.edu>
> > Sent Time: 2019-03-28 01:31:40 (Thursday)
> > To: wuzhouhui <wuzhouhui14 at mails.ucas.ac.cn>, centos at centos.org
> > Cc:
> > Subject: Re: [CentOS] How to specify
2017 Sep 13
2
one brick one volume process dies?
Additionally the brick log file of the same brick would be required. Please
check whether the brick process went down or crashed. Doing a volume start force
should resolve the issue.
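A hedged sketch of those checks (VOLNAME is a placeholder; brick logs sit under /var/log/glusterfs/bricks/ by default):
$ gluster volume status VOLNAME              # a dead brick shows Online: N and no PID
$ less /var/log/glusterfs/bricks/*.log       # look for the crash or shutdown message
$ gluster volume start VOLNAME force         # respawns the missing brick process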
On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote:
> Please send me the logs as well, i.e. glusterd.logs and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
2018 May 02
1
unable to remove ACLs
On 01/05/18 23:59, Vijay Bellur wrote:
>
>
> On Tue, May 1, 2018 at 5:46 AM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
>
> hi guys
>
> I have a simple case of:
> $ setfacl -b
> not working!
> I copy a folder out of the autofs-mounted gluster vol
> to a regular fs, and removing the acl works as
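A hedged sketch of that reproduction, with /mnt/gluster-vol and /tmp used as placeholder paths:
$ getfacl /mnt/gluster-vol/folder        # extended ACL entries are listed
$ setfacl -b /mnt/gluster-vol/folder     # strip all ACLs - reportedly fails on the gluster mount
$ cp -a /mnt/gluster-vol/folder /tmp/
$ setfacl -b /tmp/folder                 # the same command works on a regular fs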
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote:
> These symptoms appear to be the same as I've recorded in
> this post:
>
> http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
>
> On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee
> <atin.mukherjee83 at gmail.com
> <mailto:atin.mukherjee83 at gmail.com>> wrote:
>
> Additionally the
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com>
wrote:
> Additionally the brick log file of the same brick would be required.
> Please check whether the brick process went down or crashed. Doing a volume start
2017 Aug 30
0
peer rejected but connected
Could you please send me the "info" file, which is placed in the
"/var/lib/glusterd/vols/<vol-name>" directory, from all the nodes along with
glusterd.logs and command-history.
Thanks
Gaurav
On Tue, Aug 29, 2017 at 7:13 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi fellas,
> same old same
> in log of the probing peer I see:
> ...
> 2017-08-29
2023 Apr 19
2
bash test ?
On 19/04/2023 08:04, wwp wrote:
> Hello lejeczek,
>
>
> On Wed, 19 Apr 2023 07:50:29 +0200 lejeczek via CentOS <centos at centos.org> wrote:
>
>> Hi guys.
>>
>> I cannot wrap my head around this:
>>
>> -> $ unset _Val; test -z ${_Val}; echo $?
>> 0
>> -> $ unset _Val; test -n ${_Val}; echo $?
>> 0
>> -> $
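The usual explanation: with _Val unset, the unquoted ${_Val} expands to nothing, so the second command collapses to the one-argument form "test -n", which merely checks that the string "-n" is non-empty and therefore returns 0. Quoting restores the intended two-argument test:
-> $ unset _Val; test -n "${_Val}"; echo $?
1
-> $ unset _Val; test -z "${_Val}"; echo $?
0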
2017 Aug 29
3
peer rejected but connected
hi fellas,
same old same
in the log of the probing peer I see:
...
[2017-08-29 13:36:16.882196] I [MSGID: 106493]
[glusterd-handler.c:3020:__glusterd_handle_probe_query]
0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0,
op_errno: 0, ret: 0
[2017-08-29 13:36:16.904961] I [MSGID: 106490]
[glusterd-handler.c:2606:__glusterd_handle_incoming_friend_req]
0-glusterd: Received probe from uuid:
2017 Sep 04
0
heal info OK but statistics not working
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
On Mon, Sep 4, 2017 at 4:07 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi all
>
> this:
> $ gluster vol heal $_vol info
> outputs ok and exit code is 0
> But if I want to see statistics:
> $ gluster vol heal $_vol statistics
> Gathering crawl statistics on volume GROUP-WORK
2007 Sep 20
2
Re: [PATCH] kexec/kdump: statically allocate xen_phys_cpus
You posted to xen-ia64-devel & xen-ia64 (deleted the wrong word?)
rather than xen-devel so I added that to the CC.
On Thu, 2007-09-20 at 13:38 +0900, Simon Horman wrote:
> On IA64 alloc_bootmem_low() can't be called this early.
>
> Before alloc_bootmem_low() can be called, init_bootmem() needs to be called,
> which is done in find_memory(). However xen_machine_kexec_setup_resources() is
2017 Sep 04
2
heal info OK but statistics not working
hi all
this:
$ gluster vol heal $_vol info
outputs ok and exit code is 0
But if I want to see statistics:
$ gluster vol heal $_vol statistics
Gathering crawl statistics on volume GROUP-WORK has been
unsuccessful on bricks that are down. Please check if all
brick processes are running.
I suspect - gluster's inability to cope with a situation where
one peer (which is not even a brick for a single vol on
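A hedged sketch of the usual first checks for this state ($_vol as in the transcript above; whether heal-count still answers with a self-heal daemon down is exactly what is asked elsewhere in the thread):
$ gluster volume status $_vol                       # Self-heal Daemon rows show whether shd is online
$ gluster volume heal $_vol statistics heal-count   # per-brick count of entries pending heal
$ gluster volume start $_vol force                  # commonly used to respawn a missing shd or brick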