Displaying 20 results from an estimated 10000 matches similar to: "Fedora Server - as an alternative ?"
2018 Dec 20
3
Fedora Server - as an alternative ?
On 20/12/2018 15:33, mark wrote:
> lejeczek via CentOS wrote:
>> hi guys
>>
>> I wonder if any Centosians here have done something more than only
>> contemplate using Fedora Server, and have actually worked with it in
>> test/production envs.
>>
>> If there are folks here who have done it, I want to ask if you deem it to
>> be a viable option to put it on
2018 Dec 20
1
Fedora Server - as an alternative ?
On 12/20/18 10:07 AM, Patrick Bégou wrote:
> On 20/12/2018 at 14:11, lejeczek via CentOS wrote:
>> hi guys
>>
>> I wonder if any Centosians here have done something more than only
>> contemplate using Fedora Server, and have actually worked with it in
>> test/production envs.
>>
>> If there are folks here who have done it, I want to ask if you deem it
2018 Dec 20
0
Fedora Server - as an alternative ?
lejeczek via CentOS wrote:
> On 20/12/2018 15:33, mark wrote:
>
>> lejeczek via CentOS wrote:
>>> hi guys
>>>
>>> I wonder if any Centosians here have done something more than only
>>> contemplate using Fedora Server, and have actually worked with it in
>>> test/production envs.
>>>
>>> If there are folks here who have done it I
2018 Dec 20
0
Fedora Server - as an alternative ?
lejeczek via CentOS wrote:
> hi guys
>
> I wonder if any Centosians here have done something more than only
> contemplate using Fedora Server, and have actually worked with it in
> test/production envs.
>
> If there are folks here who have done it, I want to ask if you deem it to
> be a viable option to put it on at least a portion of the server stack.
>
> Anybody?
>
I would
2018 Dec 21
0
Fedora Server - as an alternative ?
Robert Moskowitz wrote:
> On 12/20/18 10:33 AM, mark wrote:
>> lejeczek via CentOS wrote:
>>>
>>> I wonder if any Centosians here have done something more than only
>>> contemplate using Fedora Server, and have actually worked with it in
>>> test/production envs.
>>>
>>> If there are folks here who have done it, I want to ask if you deem it
2018 Dec 20
0
Fedora Server - as an alternative ?
On 12/20/18 5:11 AM, lejeczek via CentOS wrote:
> hi guys
>
> I wonder if any Centosians here have done something more than only
> contemplate using Fedora Server, and have actually worked with it in
> test/production envs.
>
> If there are folks here who have done it, I want to ask if you deem it to
> be a viable option to put it on at least a portion of the server stack.
>
2018 Dec 20
0
Fedora Server - as an alternative ?
On 20/12/2018 at 14:11, lejeczek via CentOS wrote:
> hi guys
>
> I wonder if any Centosians here have done something more than only
> contemplate using Fedora Server, and have actually worked with it in
> test/production envs.
>
> If there are folks here who have done it, I want to ask if you deem it
> to be a viable option to put it on at least a portion of the server stack.
>
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote:
> Please provide the output of gluster volume info, gluster
> volume status and gluster peer status.
>
> Apart from the above info, please provide glusterd logs,
> cmd_history.log.
>
> Thanks
> Gaurav
>
> On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
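A minimal sketch of how the information requested above could be gathered, assuming the stock log location under /var/log/glusterfs (adjust the paths if glusterd logs elsewhere):
    gluster volume info   > vol_info.txt
    gluster volume status > vol_status.txt
    gluster peer status   > peer_status.txt
    # default locations on most installs
    cp /var/log/glusterfs/glusterd.log /var/log/glusterfs/cmd_history.log .
    tar czf gluster-debug.tar.gz vol_info.txt vol_status.txt peer_status.txt \
        glusterd.log cmd_history.log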
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd.log and cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>
>
> On 13/09/17 06:21, Gaurav Yadav wrote:
>
>> Please provide the output of gluster volume info, gluster volume status
>> and gluster peer status.
>>
>> Apart from the above info, please provide glusterd logs,
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of that same brick would be required. Please
check whether the brick process went down or crashed. Doing a volume start
force should resolve the issue.
On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote:
> Please send me the logs as well, i.e. glusterd.log and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
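A minimal sketch of the checks suggested above, assuming the volume name is in $vol and the stock brick log directory /var/log/glusterfs/bricks/ (one log file per brick path):
    # is the brick process for this volume still up?
    gluster volume status $vol
    pgrep -af glusterfsd
    # inspect the brick's own log for a crash or shutdown message
    less /var/log/glusterfs/bricks/*.log
    # respawn any brick that is down; running bricks are left untouched
    gluster volume start $vol force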
2019 Aug 19
1
freeIPA version vs RHEL's
On 13/08/2019 13:33, Jonathan Billings wrote:
> On Tue, Aug 13, 2019 at 01:02:58PM +0100, lejeczek via CentOS wrote:
>
>> I wonder if anybody might know the version of freeIPA in RHEL?
>>
>> I hear it's 4.6.6, and if that's true then I might ask when Centos will
>> get it.
> RHEL 7.7 has FreeIPA 4.6.5, and eventually CentOS will get that
> version, but it's
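A rough sketch of checking what a given point release actually ships (FreeIPA is packaged as ipa-server in RHEL/CentOS; repoquery comes from yum-utils):
    # version offered by the enabled repos
    yum info ipa-server
    # or query the repos without touching the installed system
    repoquery --qf '%{name}-%{version}-%{release}' ipa-server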
2016 Feb 05
2
safest way to mount iscsi loopback..
.. what is?
fellow Centosians,
how do you mount your loopback targets?
I'm trying an LVM backstore. I was hoping to do it by
UUID, but the device is exposed more than once and I don't know how the
kernel would decide which device to use.
thanks
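One possible approach, sketched here under the assumption that the same LUN really is visible down several paths: mount by filesystem UUID, or hand the duplicate paths to multipath and mount the dm device, rather than picking a raw /dev/sdX node (device names below are examples only):
    # list every path the kernel sees, with filesystem UUIDs
    lsblk -o NAME,TYPE,SIZE,UUID,MOUNTPOINT
    # mounting by UUID makes the enumeration order irrelevant
    mount UUID=<fs-uuid-from-blkid> /mnt/target
    # or let device-mapper-multipath own the duplicates and mount its device
    multipath -ll
    mount /dev/mapper/mpatha /mnt/target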
2017 Sep 12
2
one brick one volume process dies?
hi everyone
I have a 3-peer cluster with all vols in replica mode, 9 vols.
What I see, unfortunately, is that one brick fails in one vol, and
when it happens it's always the same vol on the same brick.
The command: gluster vol status $vol - would show the brick not online.
Restarting glusterd with systemctl does not help; only a
system reboot seems to help, until it happens again the next time.
How to troubleshoot this
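Before falling back to a reboot it may be worth confirming, as a rough sketch, which process actually died and why (volume name assumed in $vol):
    # the failed brick shows no PID and 'N' under Online
    gluster vol status $vol
    # list the brick processes that are still running
    pgrep -af glusterfsd
    # look for an OOM kill or crash around the time the brick went down
    journalctl --since yesterday | grep -iE 'oom|glusterfsd'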
2019 Aug 13
2
freeIPA version vs RHEL's
hi guys
I wonder if anybody might know the version of freeIPA in RHEL?
I hear it's 4.6.6, and if that's true then I might ask when Centos will
get it.
many thanks, L.
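For reference, a quick sketch of checking what is installed locally (assuming the ipa tools are present; package names as in the stock repos):
    # client-side tool and API version
    ipa --version
    # server and client packages, if installed on this host
    rpm -q ipa-server ipa-client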
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote:
> These symptoms appear to be the same as I've recorded in
> this post:
>
> http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
>
> On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee
> <atin.mukherjee83 at gmail.com> wrote:
>
> Additionally the
2017 Sep 13
0
one brick one volume process dies?
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
Apart from the above info, please provide glusterd logs, cmd_history.log.
Thanks
Gaurav
On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi everyone
>
> I have a 3-peer cluster with all vols in replica mode, 9 vols.
> What I see, unfortunately, is that one brick
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com>
wrote:
> Additionally, the brick log file of that same brick would be required.
> Please check whether the brick process went down or crashed. Doing a volume start
2020 Feb 24
4
Encrypted container on CentOS VPS
On 2020-02-24 14:37, lejeczek via CentOS wrote:
>
>
> On 24/02/2020 10:26, Roberto Ragusa wrote:
>> On 2020-02-24 10:51, lejeczek via CentOS wrote:
>>> g) remember!! still, at least (depending on how you mount it),
>>> 'root' will have access to that data while it is mounted,
>>> obviously!
>>
>> More than that: the root user will be able to
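A minimal sketch of the file-backed LUKS container being discussed, assuming cryptsetup is available on the VPS and using made-up paths and names; as noted above, while it is open and mounted, root on that host can still read the plaintext:
    # create and format a 1 GiB container file (example path)
    dd if=/dev/zero of=/root/container.img bs=1M count=1024
    cryptsetup luksFormat /root/container.img
    # open it, create a filesystem, mount
    cryptsetup open /root/container.img secretvol
    mkfs.xfs /dev/mapper/secretvol
    mount /dev/mapper/secretvol /mnt/secret
    # close it again so the data only exists encrypted at rest
    umount /mnt/secret
    cryptsetup close secretvol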
2022 Jun 23
2
NUT on Windows revival
Hello all,
After a hectic month in private life, as a byproduct I've got a viable
merge of the last released, NUT 2.6.5-based Windows-ready codebase (thanks to
the giants active a dozen years ago, on whose shoulders I stood today) and
the modern 2.8.x/master branch, fixing the merge conflicts and build warnings. Some
details were tracked in discussion of
https://github.com/networkupstools/nut/issues/5