Displaying 14 results from an estimated 14 matches similar to: "CESA-2017:3315 Moderate CentOS 7 kernel Security Update"
2018 Feb 07
2
/dev/md1 => 93% Used. Warning. Disk Filling up. - what would be safe to delete in /boot ?
Hello CentOS users,
recently I keep getting logwatch warnings from my two dedicated
servers running CentOS 7.4.1708.
I guess this is because of the numerous kernel updates (for
Spectre+Meltdown) in the recent past?
Could someone please suggest which files in my /boot partition would
be safe to delete?
I would like to avoid the situation of having to boot the rescue
partition etc.
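Rather than deleting files from /boot by hand, the usual approach on CentOS 7 is to remove old kernel *packages*, which also prunes the matching GRUB2 entries. A minimal sketch, assuming a stock CentOS 7 install with the yum-utils package available:

```shell
# List installed kernel packages, oldest first
rpm -q kernel | sort -V

# Keep only the two newest kernels (the running kernel is never removed).
# package-cleanup is provided by the yum-utils package.
package-cleanup --oldkernels --count=2

# To make yum prune automatically on future updates,
# set installonly_limit=2 in /etc/yum.conf
```

Because the packages are removed through yum/rpm, the GRUB2 menu is regenerated and the remaining kernels stay bootable, avoiding the rescue-partition scenario.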
2018 Jan 12
1
Is kernel-3.10.0-693.11.6.el7 tested with old hardware?
Today we tried to update machines w/ Core2 Duo E6750 from
3.10.0-693.11.1.el7.centos.plus to 3.10.0-693.11.6.el7.centos.plus and
the machines did not boot due to a kernel panic.
Before we dig any further I wanted to know if the 11.6 kernel has been
tested on old hardware, too, or if the problem is well known but not
documented (yet).
Thank you in advance!
Gerhard Schneider
--
Gerhard
2018 Jan 08
0
CentOS 7.4 fails to boot as Xen PV guest: resurfaces (now also) with centosplus kernel 693.11.6.el7
On Sun, Jan 7, 2018 at 12:07 PM, David Groep <davidg at nikhef.nl> wrote:
> Dear all,
>
> Maybe I'm the only one - so before filing it as a bug: it appears that
> the latest set of kernel patches in 3.10.0-693.11.6.el7 makes issue
> 0013763 "CentOS 7.4 kernel (3.10.0-693*) fails to boot as Xen PV guest"
> re-surface *also* with the CentOS PLUS kernel. But
2018 Jan 07
2
CentOS 7.4 fails to boot as Xen PV guest: resurfaces (now also) with centosplus kernel 693.11.6.el7
Dear all,
Maybe I'm the only one - so before filing it as a bug: it appears that
the latest set of kernel patches in 3.10.0-693.11.6.el7 makes issue
0013763 "CentOS 7.4 kernel (3.10.0-693*) fails to boot as Xen PV guest"
re-surface *also* with the CentOS PLUS kernel. But maybe in a
different way ...
Thanks to the (great!) quick work on making the plus kernel available
(in #14330,
2011 Sep 06
1
(mount.ocfs2, 3315, 4):ocfs2_global_read_info:403 ERROR: status = 24
Hi List,
I've upgraded some machines from Linux kernel 2.6.38 to 3.0.4. Now
I'm always seeing this message when mounting an OCFS2 volume:
[ 38.745584] (mount.ocfs2,3315,4):ocfs2_global_read_info:403 ERROR:
status = 24
[ 38.776395] (mount.ocfs2,3315,4):ocfs2_global_read_info:403 ERROR:
status = 24
ocfs2-tools 1.6.3-1
Stefan
2017 Dec 15
0
2.1 to 2.2 server migration Qs: sanity check, config ?
Please read between the lines =)
At the least, you should remove the autocreate plugin.
> On December 15, 2017 at 4:47 PM voytek at sbt.net.au wrote:
>
>
> I have an old Centos 6 running dovecot 2.1.17 with Postfix 2.1x, mysql
> virtual domains, in the process of setting a new Centos 7 to migrate,
> copied /etc/dovecot, made some minor edits to get rid of errors, added
>
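For the autocreate plugin mentioned above: in Dovecot 2.1+ mailbox autocreation is built into the core config, so a 2.1-to-2.2 migration can drop the plugin entirely. A minimal sketch of the replacement syntax (the mailbox names here are examples, not taken from the poster's config):

```
# Dovecot 2.1+: declare autocreation per mailbox instead of
# loading the separate autocreate plugin.
namespace inbox {
  mailbox Sent {
    auto = subscribe   # create and subscribe automatically
    special_use = \Sent
  }
  mailbox Trash {
    auto = create      # create, but do not subscribe
    special_use = \Trash
  }
}
```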
2018 Jan 02
0
DHCP timeout and mysteriously dropping IP address
Hi everyone,
I'm having trouble with a CentOS 7 guest running on a Hyper-V host. For
some reason, the CentOS guest randomly drops its IP address. Running
"systemctl restart NetworkManager" on the console will restore IP
connectivity without a reboot. I think that DHCP is timing out, but I'm not
sure what to do about it. Is there a way to tell NetworkManager to keep
trying after
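One way to make NetworkManager more tolerant of slow DHCP, sketched below with nmcli; "eth0" is a placeholder for the actual connection name (check `nmcli con show`), and the timeout value is an example:

```shell
# Raise the per-attempt DHCP timeout (seconds) and keep the interface
# up even when DHCP fails, so NetworkManager retries in the background
# instead of dropping the address.
nmcli connection modify eth0 ipv4.dhcp-timeout 90
nmcli connection modify eth0 ipv4.may-fail yes

# Re-activate the connection to apply the changes
nmcli connection up eth0
```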
2017 Dec 15
2
2.1 to 2.2 server migration Qs: sanity check, config ?
I have an old CentOS 6 box running Dovecot 2.1.17 with Postfix 2.1x and MySQL
virtual domains. In the process of setting up a new CentOS 7 box to migrate to, I
copied /etc/dovecot, made some minor edits to get rid of errors, and added
Let's Encrypt in place of self-certified certs. It seems to work: using a mail
client I can log on via STARTTLS/110/143 and TLS/995/993 with no visible errors
when logging in
is there any other sanity
2017 Dec 11
3
Libguestfs Hangs on CentOS 7.4
Hi,
We seem to be hitting an issue where libguestfs keeps hanging: virt-resize,
guestmount, etc. never complete. If we set
LIBGUESTFS_BACKEND_SETTINGS=force_tcg, they complete.
The issue starts when updating to CentOS 7.4 (CentOS 7.3 work fine). It
doesn't seem to affect all 7.4 hypervisors and the only similarity that we
have found is that they all use NVMe drives.
Non-Volatile memory
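The force_tcg workaround mentioned above switches the libguestfs appliance from KVM acceleration to TCG software emulation, which is slower but sidesteps the hang. A minimal sketch:

```shell
# Force the libguestfs appliance to use TCG instead of KVM
export LIBGUESTFS_BACKEND_SETTINGS=force_tcg

# Sanity-check the appliance before running real operations
libguestfs-test-tool
```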
2018 Jan 10
0
Nvidia maximum pixel clock issue in kmod-nvidia-384.98
To: CentOS mailing list <centos at centos.org>
Subject: Re: [CentOS] Nvidia maximum pixel clock issue in
kmod-nvidia-384.98
Dear All,
Enter the BIOS setup and go to Graphic Configuration; change On Board
VGA to PCI Express x16.
The Automatic setting selects the on-board VGA.
See you soon
Alexander
Am 09.01.2018 um 13:00 schrieb centos-request at centos.org:
> Send CentOS mailing list submissions to
>
2017 Dec 13
2
Error: stat no such file or directory with 2.2.33.2
We have upgraded today from Dovecot 2.2.31 to Dovecot 2.2.33.2 and
modified our config to include ITERINDEX in mail_location and we are
detecting some errors like this:
Dec 13 08:17:31 buzon_rhel7 dovecot: imap(rboloix): Error:
stat(/buzones/location/18/48/rboloix/mailboxes/SIT - Pra&AwE-cticas
2014/dbox-Mails) failed: No such file or directory
Dec 13 08:40:24 buzon_rhel7
2017 Dec 19
0
kernel: blk_cloned_rq_check_limits: over max segments limit., Device Mapper Multipath, iBFT, iSCSI COMSTAR
Hi,
WARNING: Long post ahead
I have an issue when starting multipathd. The kernel complains about "blk_cloned_rq_check_limits:
over max segments limit".
The server in question is configured for KVM hosting. It boots via iBFT to an iSCSI volume. Target
is COMSTAR and underlying that is a ZFS volume (100GB). The server also has two infiniband cards
providing four (4) more paths over SRP
2018 Feb 26
1
Going back to a minimal system : strange problem
On 26/02/2018 at 16:12, Gordon Messmer wrote:
> I would hazard to guess that the flaw is simply that from time to
> time, packages are added to the minimal install as a side effect of
> adding in new dependencies. If you had a minimal install and simply
> ran "yum update", you would periodically see yum report that it would
> install new packages for dependencies, in
2018 Jan 25
3
Re: [ovirt-users] Slow conversion from VMware in 4.1
On Wed, Jan 24, 2018 at 11:49:13PM +0100, Luca 'remix_tj' Lorenzetto wrote:
> Hello,
>
> i've started my migrations from vmware today. I had successfully
> migrated over 200 VM from vmware to another cluster based on 4.0 using
> our home-made scripts interacting with the API's. All the migrated vms
> are running RHEL 6 or 7, with no SELinux.
>
> We