similar to: Beware - Yum 3.5 to 3.6 upgrade replaces named.conf

Displaying 20 results from an estimated 10000 matches similar to: "Beware - Yum 3.5 to 3.6 upgrade replaces named.conf"

2005 Jun 23
3
Automated YUM update kills DNS
We've got several CentOS 3.x systems running DNS that we keep updated automatically via YUM. Recently two of those systems (not all of them), when updating themselves to the latest versions of BIND, automatically replaced /etc/named.conf with a new one and saved the old one as /etc/named.conf.rpmsave, which of course broke DNS for those servers. All servers got updated, but only two of
2005 Nov 15
0
Re: [centos] Beware - Yum 3.5 to 3.6 upgrade replaces named.conf -- not a YUM issue ...
"Bzzzt wrong --- yada" is contentious, we don't need it. Please. 2: It's not rpm, it's the packager => "The killer is innocent, the *gun* killed the victim" " the gun didn't kill him, the *bullet* did" "no, the wound did!" "no, the loss of blood did" "no, dying did". *jeesh* Brian Brunner brian.t.brunner at
2016 May 06
4
yum update (first in a long time) - /var/log/dovecot no longer used
On Thursday 05 May 2016 17:16:17 Valeri Galtsev wrote: > There were several heated discussions on this list, and elsewhere. This is > not intended to start a new one, but to help someone who missed them to > define their stance. > > People split into two groups: > > Opponents of systemd (firewalld, etc.) who argue that from formerly > Unix-like system Linux becomes
2004 Sep 21
1
yum configuration files lost while/after updating ?
Hello CentOS users, Today I installed a fresh CentOS 3.1 and modified the [update] section of my /etc/yum.conf to point at my own update repository. My update repository (3.1) contains exactly the same content as the official one. The update was successful, but now /etc/yum.conf is missing :( Here is what I noticed during the update process: # cat /etc/redhat-release CentOS
2004 Nov 30
5
RE: [Shorewall-devel] SFTP
On Tue, 2004-11-30 at 12:17 +0700, Matthew Hodgett wrote: > > As for the 169.254 issue I tried to search the archives but got nothing. > I then tried to search on generic words, nothing. I then tried some > really common words like "help", "initiated", "masq" - nothing. I think > the index might be corrupt because I get no
2005 Jun 09
3
yum overwriting
Is there a way to keep yum from overwriting my yum.conf when I update the machine? -- Computer House Calls, Networks, Security, Web Design: http://www.emmanuelcomputerconsulting.com What businesses are in Brunswick, Maryland? Check Brunswick First! http://www.checkbrunswickfirst.com My "Foundation" verse: Isa 54:17 No weapon that is formed against thee shall prosper; and every
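A common answer to this question (sketched here, not taken from the thread) is an exclude line in yum's own configuration, which stops yum from updating the listed packages at all and therefore from ever touching their config files. The package globs below are illustrative:

```ini
# /etc/yum.conf -- minimal sketch, assuming you want to pin yum itself
# and the BIND packages; adjust the globs to the packages you care about
[main]
exclude=yum bind*
```

The trade-off is that excluded packages also stop receiving security fixes, so this is more of a stopgap than a policy.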
2005 Feb 05
9
Hot Failover
Hello List: Recently our shorewall FW server went dead (PS failure) and brought the entire system down. Luckily we are testing the FW and other servers, so we did not lose anything. Now we have decided to set up two Shorewall FW servers, with a primary and another failover FW server. I have done some research, cruised the Internet, and found that a product "UCARP"
2004 Nov 20
1
yum.conf issue
I just did a fresh install of CentOS 3.1 and ran yum -y update to get all the latest updates. When it finished the update, of course, the yum.conf symlink was broken, and now I have /etc/centos-yum.conf /etc/yum.conf.rpmsave /etc/yum.conf-SAVE /etc/yum.conf <---broken symlink. Which of these files do I create a symlink to for /etc/yum.conf so I can continue to get my updates?
2008 Jul 09
2
Bind update overwrites named.conf
I just had a customer's bind server lose all of its local DNS records. Yum updated the bind packages this morning at ~6am and replaced the original /etc/named.conf file, saving the old one as named.conf.rpmsave. This seems like the opposite of what it should have done (i.e. save the new file as named.conf.rpmnew). There does not appear to be any difference between the originally shipped
2013 Mar 07
4
[Gluster-devel] glusterfs-3.4.0alpha2 released
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha2/ SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.0alpha2.tar.gz This release is made off jenkins-release-19 -- Gluster Build System
2006 Jan 14
1
Getting rid of all the .rpmsave files
Hi list! After yum upgrade of my CentOS 4.0 to 4.2 x86_64 box I now have countless .rpmsave files, about 90% of them I never touched or are config/start scripts. Does anyone have a neat script that will find all the rpmsave stuff in /etc and then prompts per file whether it can replace the original or not? Somehow doing this all by hand doesn't seem a very attractive idea :) Thanks!!
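The poster's wish - find every .rpmsave under /etc and be prompted per file - fits in a few lines of shell. A minimal sketch (the function name, prompts, and diff step are my own, not from the thread; try it on a scratch directory before pointing it at /etc):

```shell
# Sketch: walk every *.rpmsave under a directory, show the diff against
# the live file, and ask per file whether to move the saved copy back.
review_rpmsave() {
    dir="${1:-/etc}"
    # Note: a for-loop over find output breaks on paths containing
    # whitespace; good enough for /etc, not a general solution.
    for saved in $(find "$dir" -name '*.rpmsave'); do
        orig="${saved%.rpmsave}"
        echo "--- $saved"
        diff -u "$orig" "$saved" || true   # show what the update changed
        printf 'Restore saved copy over %s? [y/N] ' "$orig"
        read -r ans
        case "$ans" in
            [Yy]*) mv "$saved" "$orig" ;;
            *)     echo "kept $orig" ;;
        esac
    done
}
```

Running `review_rpmsave /etc` then prompts y/N per file; answering y moves the saved copy back over the packaged one.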
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, I applied the workaround for this bug and now df shows the right size: [root at stor1 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0 /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1 stor1data:/volumedisk0 101T 3,3T 97T 4% /volumedisk0 stor1data:/volumedisk1
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi, Some days ago all my glusterfs configuration was working fine. Today I realized that the total size reported by the df command had changed and is smaller than the aggregated capacity of all the bricks in the volume. I checked that all the volume statuses are fine, all the glusterd daemons are running, and there are no errors in the logs; however, df shows a bad total size. My configuration for one volume:
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, There is a known issue with gluster 3.12.x builds (see [1]) so you may be running into this. The "shared-brick-count" values seem fine on stor1. Please send us "grep -n "share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes so we can check if they are the cause. Regards, Nithya [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
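Collecting the requested output from every node can be scripted rather than done by hand. A hedged sketch (the hostnames are the ones from the thread; the GR_RUN variable is an invented hook so the loop can be exercised without real ssh):

```shell
# Sketch: run the same grep on each gluster peer in turn.
# GR_RUN defaults to ssh; a test can point it at a local stub instead.
grep_share_on_peers() {
    run="${GR_RUN:-ssh}"
    for host in "$@"; do
        echo "== $host"
        $run "$host" "grep -n share /var/lib/glusterd/vols/volumedisk1/*"
    done
}
```

`grep_share_on_peers stor1data stor2data stor3data` prints each node's shared-brick-count lines in turn, which is the comparison the reply asks for.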
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, My initial setup was composed of 2 similar nodes: stor1data and stor2data. A month ago I expanded both volumes with a new node: stor3data (2 bricks per volume). Of course, to add the new peer with its bricks I then ran the 'rebalance force' operation. This task finished successfully (you can see the info below) and the number of files on the 3 nodes was very similar. For volumedisk1 I
2016 May 05
2
[MASSMAIL] Re: yum update (first in a long time) - /var/log/dovecot no longer used
On Thursday 05 May 2016 15:19:47 John Hodrien wrote: > > I'd take a stab at: > > journalctl -fu dovecot > > The full RHEL7 System Administrators Guide is well worth a read, but here's > the bit you're probably after. > > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/s1-Using_the_Journal.html
2007 Aug 31
1
rpmsave files and Pegasus
Hello, I just upgraded CentOS and am now doing some post-upgrade tasks to make sure everything is fine. First of all, it seems the upgrade added an .rpmsave extension to configuration files, which we should copy back; an example is /etc/httpd/conf/httpd.conf.rpmsave. Also, what is Pegasus? There are a lot of .rpmsave files in /var/lib/pegasus, for example:
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote: > Hi Nithya, > > I applied the workaround for this bug and now df shows the right size: > > That is good to hear. > [root at stor1 ~]# df -h > Filesystem Size Used Avail Use% Mounted on > /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0 > /dev/sdc1
2009 Jun 12
1
Any problem of auto updating using yum?
Hello, all. My systems are CentOS 4.x or 5.x (i386 and x86_64) running various services (apache, mysql, java, sendmail... etc.), and I would like to set up auto-update using yum. But some staff didn't agree with my auto-update plan, because some services can be affected by an auto update. There have been no side effects from yum updates until now, and it seems impossible for me to check
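A possible middle ground for this situation (purely a sketch, not something from the thread) is a nightly cron job that updates everything except the services the staff are worried about. The schedule, log path, and package globs below are all illustrative:

```shell
# /etc/cron.d/yum-auto-update -- hypothetical file; runs nightly at 03:30
# and skips the service packages that colleagues consider fragile
30 3 * * * root /usr/bin/yum -y --exclude='httpd* mysql* sendmail*' update >> /var/log/yum-auto.log 2>&1
```

Packages skipped this way still need a manual update pass now and then, or they quietly fall behind on security fixes.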
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, Below the output of both volumes: [root at stor1t ~]# gluster volume rebalance volumedisk1 status Node Rebalanced-files size scanned failures skipped status run time in h:m:s --------- ----------- ----------- ----------- ----------- ----------- ------------