Displaying 20 results from an estimated 10000 matches similar to: "AMD epyc/naples"
2018 May 02
1
unable to remove ACLs
On 01/05/18 23:59, Vijay Bellur wrote:
>
>
> On Tue, May 1, 2018 at 5:46 AM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
>
> hi guys
>
> I have a simple case of:
> $ setfacl -b
> not working!
> I copy a folder outside of autofs mounted gluster vol,
> to a regular fs and removing acl works as
2018 May 01
2
unable to remove ACLs
hi guys
I have a simple case of:
$ setfacl -b
not working!
I copy a folder outside of autofs mounted gluster vol, to a
regular fs and removing acl works as expected.
Inside mounted gluster vol I seem to be able to
modify/remove ACLs for users, groups and masks but that one
simple, important thing does not work.
It is also not the case of default ACLs being enforced from
the parent, for I
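[Editor's note: a minimal reproduction of the failing case described above, assuming the volume is FUSE-mounted under the hypothetical path /mnt/gv0; directory name is a placeholder.]

  getfacl /mnt/gv0/somedir      # show the current ACL entries
  setfacl -b /mnt/gv0/somedir   # strip all extended ACL entries and the mask
  getfacl /mnt/gv0/somedir      # only the base owner/group/other entries should remain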
2020 Jan 01
0
KVM Random Reboots AMD EPYC Server
> our new server with AMD EPYC and Supermicro board reboots randomly.
> There is no error message before the reboot in /var/log/messages.
Anything in the hardware logs of the server, like a memory error? Any
watchdog on the server acting up?
We run CentOS 7 and KVM on AMD Opteron and AMD EPYC servers without issues.
Regards,
Simon
>
> we are running 2 servers with VMware
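[Editor's note: a hedged sketch of where to look for hardware-level evidence on such a box; ipmitool and the EDAC sysfs entries are assumed to be available, and the exact output varies by BMC and kernel.]

  ipmitool sel elist                                            # BMC system event log (ECC, thermal, power faults)
  journalctl -k | grep -iE 'mce|machine check|hardware error'   # kernel machine-check messages
  cat /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null      # corrected-memory-error counters, if EDAC is loaded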
2017 Sep 01
2
peer rejected but connected
Logs from the newly added node helped me in the RCA of the issue.
The info file on node 10.5.6.17 contains an additional property,
"tier-enabled", which is not present in the info file on the other 3 nodes; hence,
when the gluster peer probe call is made, the cksum is compared in order
to maintain consistency across the cluster. In this
case, as both files are different, leading to a different cksum, causing state
in
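[Editor's note: a quick way to confirm this kind of mismatch, assuming a hypothetical volume name VOLNAME; run on every peer and compare the output. The paths are the standard glusterd working directory.]

  grep tier-enabled /var/lib/glusterd/vols/VOLNAME/info   # present only on the rejected peer?
  cat /var/lib/glusterd/vols/VOLNAME/cksum                # must match on all peers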
2018 May 01
0
unable to remove ACLs
On Tue, May 1, 2018 at 5:46 AM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi guys
>
> I have a simple case of:
> $ setfacl -b
> not working!
> I copy a folder outside of autofs mounted gluster vol, to a regular fs and
> removing acl works as expected.
> Inside mounted gluster vol I seem to be able to modify/remove ACLs for
> users, groups and masks but that one
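[Editor's note: one frequently overlooked detail (an assumption here, not something confirmed in the thread): POSIX ACLs only work on a GlusterFS FUSE client if the volume is mounted with the acl option. A sketch with hypothetical server/volume names:]

  mount | grep glusterfs                           # check the current mount options
  mount -t glusterfs -o acl server1:/gv0 /mnt/gv0  # remount with ACL support
  # or in fstab / the autofs map:
  # server1:/gv0  /mnt/gv0  glusterfs  defaults,acl,_netdev  0 0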
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote:
> Please provide the output of gluster volume info, gluster
> volume status and gluster peer status.
>
> Apart from the above info, please provide glusterd logs,
> cmd_history.log.
>
> Thanks
> Gaurav
>
> On Tue, Sep 12, 2017 at 2:22 PM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd logs and cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>
>
> On 13/09/17 06:21, Gaurav Yadav wrote:
>
>> Please provide the output of gluster volume info, gluster volume status
>> and gluster peer status.
>>
>> Apart from above info, please provide glusterd logs,
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of the same brick would be required. Please
check whether the brick process went down or crashed. Doing a volume start force
should resolve the issue.
On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote:
> Please send me the logs as well, i.e. glusterd logs and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
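[Editor's note: a minimal sketch of the suggested check-and-recover sequence, assuming a hypothetical volume name VOLNAME and the stock log locations.]

  gluster volume status VOLNAME                       # identify which brick shows Online: N
  less /var/log/glusterfs/bricks/<brick-path>.log     # on the affected node, look for a crash or shutdown message
  gluster volume start VOLNAME force                  # respawns only the brick processes that are not running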
2017 Sep 04
2
heal info OK but statistics not working
1) one peer, out of four, got separated from the network,
from the rest of the cluster.
2) that peer (while it was unavailable) got detached with the
"gluster peer detach" command, which succeeded,
so now the cluster comprises three peers
3) Self-heal daemon (for some reason) does not start (with an
attempt to restart glusterd) on the peer which probed that
fourth peer.
4) fourth
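[Editor's note: for context, a hedged sketch of the commands involved, with VOLNAME as a placeholder; the statistics call depends on the self-heal daemon being up on the queried peers.]

  gluster volume status VOLNAME                     # the Self-heal Daemon rows show whether shd runs on each peer
  gluster volume heal VOLNAME info                  # the part reported as working in this thread
  gluster volume heal VOLNAME statistics heal-count # the part reported as failing
  gluster volume start VOLNAME force                # one way to respawn a missing shd without touching the bricks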
2022 Apr 15
0
c6a and m6a AMD Epyc AWS EC2 instances support for CentOS 8 AMI Marketplace 47k9ia2igxpcce2bzo8u3kj03
Hello,
We launched some EC2 servers 6 months ago using the CentOS 8
MarketPlace AMI 47k9ia2igxpcce2bzo8u3kj03 (
https://aws.amazon.com/marketplace/pp/prodview-ndxelprnnxecs)
Now we have migrated these servers to CentOS Stream 8.
We can change the instance type of these servers to the m6i and c6i Intel-based
CPUs, but we cannot change the instance type to the new AMD EPYC c6a and
m6a. There is an
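[Editor's note: for reference, a hedged sketch of the type change itself with the standard AWS CLI; the instance ID and target type are placeholders, and whether c6a/m6a is accepted still depends on what the Marketplace product allows for that AMI.]

  aws ec2 stop-instances  --instance-ids i-0123456789abcdef0
  aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type Value=c6a.xlarge
  aws ec2 start-instances --instance-ids i-0123456789abcdef0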
2020 Jan 01
2
KVM Random Reboots AMD EPYC Server
our new server with AMD EPYC and Supermicro board reboots randomly.
There is no error message before the reboot in /var/log/messages.
we are running 2 servers with VMware Workstation without any problem.
The new server should run KVM.
older servers with AMD (before EPYC) run KVM without any problem.
any idea or recommendation?
--
Best regards,
Helmut Drodofsky
Internet XS Service GmbH
2023 Apr 19
2
bash test ?
On 19/04/2023 08:04, wwp wrote:
> Hello lejeczek,
>
>
> On Wed, 19 Apr 2023 07:50:29 +0200 lejeczek via CentOS <centos at centos.org> wrote:
>
>> Hi guys.
>>
>> I cannot wrap my head around this:
>>
>> -> $ unset _Val; test -z ${_Val}; echo $?
>> 0
>> -> $ unset _Val; test -n ${_Val}; echo $?
>> 0
>> -> $
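[Editor's note: what is being observed here, as a sketch not taken from the thread: with the variable unset and unquoted, the expansion disappears entirely, so test receives a single argument and the one-argument rule applies ("true if the argument is non-empty"); quoting restores the intended two-argument form.]

  unset _Val
  test -z ${_Val};   echo $?   # 0: expands to `test -z`, and "-z" itself is a non-empty string
  test -n ${_Val};   echo $?   # 0: expands to `test -n`, same one-argument rule
  test -z "${_Val}"; echo $?   # 0: two arguments, the empty string really is zero-length
  test -n "${_Val}"; echo $?   # 1: two arguments, the empty string is not non-empty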
2017 Aug 29
3
peer rejected but connected
hi fellas,
same old same
in log of the probing peer I see:
...
[2017-08-29 13:36:16.882196] I [MSGID: 106493]
[glusterd-handler.c:3020:__glusterd_handle_probe_query]
0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0,
op_errno: 0, ret: 0
[2017-08-29 13:36:16.904961] I [MSGID: 106490]
[glusterd-handler.c:2606:__glusterd_handle_incoming_friend_req]
0-glusterd: Received probe from uuid:
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote:
> These symptoms appear to be the same as I've recorded in
> this post:
>
> http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
>
> On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee
> <atin.mukherjee83 at gmail.com
> <mailto:atin.mukherjee83 at gmail.com>> wrote:
>
> Additionally the
2017 Aug 30
0
peer rejected but connected
Could you please send me the "info" file, which is placed in the
"/var/lib/glusterd/vols/<vol-name>" directory, from all the nodes, along with
glusterd.logs and command-history.
Thanks
Gaurav
On Tue, Aug 29, 2017 at 7:13 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi fellas,
> same old same
> in log of the probing peer I see:
> ...
> 2017-08-29
2017 Jul 24
2
vol status detail - times out?
hi fellas
would you know what could be the problem with: 'vol status
detail' always timing out?
After I did the above I had to restart glusterd on the peer
on which the command was issued.
I run 3.8.14. Everything seems to work OK.
many thanks
L.
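[Editor's note: a hedged first pass at narrowing this down, assuming the stock log locations and a placeholder volume name; the CLI and glusterd logs on the issuing peer usually show which brick or peer the status RPC was waiting on.]

  gluster volume status VOLNAME detail         # scope to a single volume instead of all volumes
  tail -n 100 /var/log/glusterfs/cli.log       # on the peer where the command was run
  tail -n 100 /var/log/glusterfs/glusterd.log  # look for timeout / unanswered RPC messages around the same timestamp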
2017 Sep 04
0
heal info OK but statistics not working
Ravi/Karthick,
If one of the self-heal processes is down, will the statistics heal-count
command work?
On Mon, Sep 4, 2017 at 7:24 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> 1) one peer, out of four, got separated from the network, from the rest of
> the cluster.
> 2) that peer (while it was unavailable) got detached with the
> "gluster peer detach" command
2015 Jun 19
3
how do I conceptualize system & virtual users?
I guess this would be a common case, I am hoping for some
final clarification.
a few Linux boxes share an LDAP (multi-master) backend that
PAM/SSSD uses to authenticate users, and these LDAPs are
also used by Samba; users start @ uid 1000.
Boxes are in the same both DNS and Samba domains.
Do I treat these users as system or virtual users from a
postfix/dovecot perspective?
If it can be a
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com>
wrote:
> Additionally the brick log file of the same brick would be required.
> Please look for if brick process went down or crashed. Doing a volume start
2018 Oct 09
5
mount points @install time
hi everyone,
is there a way to add custom mount points at installation time?
And if there is, would you say /usr should/could go onto a separate
partition?
many thanks, L.
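[Editor's note: a minimal kickstart sketch of the partitioning side of this question; sizes, names, and the LVM layout are placeholders, not a recommendation, and the same mount points can be added in the graphical installer under manual partitioning. A separate /usr is generally workable on CentOS 7+ because the initramfs mounts it early, but it buys little.]

  part /boot --fstype=xfs --size=1024
  part pv.01 --size=1 --grow
  volgroup vg_sys pv.01
  logvol /     --vgname=vg_sys --name=root --fstype=xfs --size=20480
  logvol /usr  --vgname=vg_sys --name=usr  --fstype=xfs --size=10240
  logvol /home --vgname=vg_sys --name=home --fstype=xfs --size=1 --grow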