Displaying 20 results from an estimated 107 matches for "rpmsaved".
2006 Jan 14
1
Getting rid of all the .rpmsave files
Hi list!
After a yum upgrade of my CentOS 4.0 x86_64 box to 4.2 I now have countless
.rpmsave files; about 90% of them are files I never touched or config/start
scripts.
Does anyone have a neat script that will find all the rpmsave stuff in
/etc and then prompt, per file, whether it should replace the original or not?
Somehow doing this all by hand doesn't seem a very attractive idea :)
Thanks!!
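In case it helps, here is a minimal sketch of such an interactive cleanup (only an
illustration - it assumes every .rpmsave under /etc still has its original file next
to it, and it keeps the saved copy around as a backup):

    #!/bin/bash
    # Find every .rpmsave under /etc, show how it differs from the current
    # file, and ask per file whether to put the saved copy back.
    find /etc -name '*.rpmsave' | while read -r saved; do
        current="${saved%.rpmsave}"
        echo "=== $saved ==="
        diff -u "$current" "$saved"
        read -r -p "Replace $current with $saved? [y/N] " answer </dev/tty
        [ "$answer" = "y" ] && cp -a "$saved" "$current"   # the .rpmsave stays as a backup
    done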
2007 Aug 31
1
rpmsave files and Pegasus
Hello,
I just upgraded CentOS and am now doing some post-upgrade tasks to make
sure everything is fine.
First of all, it seems the upgrade added an .rpmsave extension to some
configuration files, which we should copy back.
An example is /etc/httpd/conf/httpd.conf.rpmsave
Also, what is Pegasus?
There are a lot of .rpmsave files in /var/lib/Pegasus, for example:
/var/lib/Pegasus/prev_repository_2007-08-30-1188481293.980554182.rpmsave
Thanks
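For a single file such as httpd.conf, one cautious way to handle it (just a sketch -
review the diff before copying anything back; apachectl is assumed to be installed
with the httpd package):

    # Compare the packaged config with your saved copy
    diff -u /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.rpmsave

    # If your old settings should win, keep the packaged file and restore the old one
    cp -a /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.from-rpm
    cp -a /etc/httpd/conf/httpd.conf.rpmsave /etc/httpd/conf/httpd.conf
    apachectl configtest && service httpd restart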
2013 Mar 07
4
[Gluster-devel] glusterfs-3.4.0alpha2 released
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha2/
SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.0alpha2.tar.gz
This release was made from jenkins-release-19
-- Gluster Build System
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root at stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1
2006 Aug 30
2
CentOS-4.4 update: don't forget those rpmsave and rpmnew files folks!
After you do your update, don't forget to do updatedb, makewhatis, ...
The locate for rpmnew has a couple of items of interest, and the locate for
rpmsave returns one that occupies 24MB of your precious disk -
/var/lib/Pegasus/prev_repository*.
It compresses nicely to approx. 1MB with cpio, bzipped --best.
There is a change in your rndc key too, for DNS.
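On the Pegasus point, if you want to reclaim that 24MB the same way, something along
these lines works (a sketch only - the prev_repository directories appear to be the
pre-upgrade snapshot of the Pegasus repository, so make sure you really no longer
need them before removing anything):

    cd /var/lib/Pegasus
    # Archive the saved pre-upgrade repository with cpio and bzip2 --best
    find . -depth -path './prev_repository*' -print | cpio -o | bzip2 --best > /root/pegasus-prev_repository.cpio.bz2
    # Remove the originals only after the archive tests clean
    bzip2 -t /root/pegasus-prev_repository.cpio.bz2 && rm -rf ./prev_repository*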
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, after adding the new peer with the bricks I then ran the 'rebalance
force' operation. This task finished successfully (you can see the info below)
and the number of files on the 3 nodes was very similar.
For volumedisk1 I
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
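If it saves a round trip, the same check can be collected from all three nodes in one
pass (a rough sketch; the host names are the ones from this thread and passwordless
root ssh between the nodes is assumed):

    for host in stor1data stor2data stor3data; do
        echo "### $host"
        ssh root@"$host" 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'
    done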
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
A few days ago all of my glusterfs configuration was working fine. Today I
realized that the total size reported by the df command has changed and is
smaller than the aggregated capacity of all the bricks in the volume.
I checked that the status of all the volumes is fine, all the glusterd daemons
are running, and there are no errors in the logs; however, df shows a wrong total size.
My configuration for one volume:
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, after adding the new peer with the bricks I then ran the 'rebalance
> force' operation.
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node        Rebalanced-files    size    scanned    failures    skipped    status    run time in h:m:s
---------   ----------------    ----    -------    --------    -------    ------    -----------------
2006 Apr 13
0
horde updates and config files
Horde RPMs got updated last night. However, the way the configuration
files were handled probably wasn't the best. It moved my
configuration files to .rpmsave, instead of leaving them as-is and
putting the new configs into .rpmnew (like most packages do).
warning: /usr/share/horde/config/conf.php saved as
/usr/share/horde/config/conf.php.rpmsave
warning:
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node        Rebalanced-files    size    scanned    failures    skipped    status    run time in h:m:s
---------   ----------------    ----    -------    --------    -------    ------    -----------------
2004 Nov 30
5
RE: [Shorewall-devel] SFTP
On Tue, 2004-11-30 at 12:17 +0700, Matthew Hodgett wrote:
>
> As for the 169.254 issue I tried to search the archives but got nothing.
> I then tried to search on generic words, nothing. I then tried some
> really common words like 'help', 'initiated', 'masq' - nothing. I think
> the index might be corrupt because I get no
2005 Jun 23
3
Automated YUM update kills DNS
We've got several CentOS 3.x systems running DNS that we keep updated
automatically via YUM.
Recently two of those systems (not all of them), when updating themselves
to the latest version of BIND, automatically replaced /etc/named.conf
with a new one and saved the old one as /etc/named.conf.rpmsave,
which of course broke DNS for those servers.
All servers got updated, but only two of
2005 Jan 12
3
bind and 3.4
Hello,
I encountered a problem when upgrading from 3.3 to 3.4 on i386.
The machine is a production name server running bind. Looks like the
new rpm moved my named.conf to .rpmsave and chkconfig'ed bind to off.
That's really bad. A gentler behavior would have been to save the
new named.conf as .rpmnew and not mess with the initscripts.
Anyone else notice that?
Francois Caen
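For anyone bitten by the same upgrade, recovery is basically putting the saved file
back and re-enabling the service; a rough sketch (non-chrooted bind assumed - adjust
the path if you run bind-chroot):

    # Keep the packaged config for reference, then restore the saved one
    mv /etc/named.conf /etc/named.conf.rpmnew
    mv /etc/named.conf.rpmsave /etc/named.conf
    named-checkconf /etc/named.conf    # sanity check before starting
    chkconfig named on
    service named restart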
2004 Sep 21
1
yum configuration files lost while/after updating ?
Hello centOS users,
Today I installed a fresh CentOS 3.1 and modified the [update]
section of my /etc/yum.conf to point to my own update repository.
My update repository (3.1) contains exactly the same content as the
official one. The update was successful, but now /etc/yum.conf is missing :(
Here is what I noticed during the update process:
# cat /etc/redhat-release
CentOS
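If the file is really gone and not just renamed, one way to get the stock copy back
is to extract it from the yum package itself (a sketch; the package path is a
placeholder for wherever your repository or yum cache keeps it):

    ls /etc/yum.conf.rpm* 2>/dev/null    # the old copy may have been saved rather than deleted
    cd /tmp
    # Pull only etc/yum.conf out of the package, without reinstalling anything
    rpm2cpio /path/to/yum-*.rpm | cpio -idmv ./etc/yum.conf
    cp -a ./etc/yum.conf /etc/yum.conf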
2018 Jun 18
2
Updated krb5 rpm package altered existing krb5.conf - No go
> On 15.06.2018 at 01:04, Gordon Messmer <gordon.messmer at gmail.com> wrote:
>
> On 06/14/2018 09:30 AM, me at tdiehl.org wrote:
>> On Thu, 14 Jun 2018, Richard Grainger wrote:
>>
>>> I looked at the spec file in the source RPM for the krb5-libs package
>>> and it has the correct %config(noreplace) directive next to that
>>> file in the
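If you want to see what the installed package actually marks, rpm can print the
per-file flags (a sketch; in the fflags column 'c' stands for %config and 'n' for
noreplace):

    rpm -q --queryformat '[%{FILEFLAGS:fflags}  %{FILENAMES}\n]' krb5-libs | grep '/etc/krb5.conf'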
2005 Nov 15
3
Beware - Yum 3.5 to 3.6 upgrade replaces named.conf
You get so used to yum upgrades going so smoothly but
I learned the hard way to always make a thorough
inspection after a yum update. I let yum go ahead and
upgrade from 3.5 to 3.6. Afterwards I made some basic
queries to httpd, postfix and bind named (probably a
cached query). I even checked the /var/named/
directory and saw all my hosts files.
So it looked like another smooth ride, well, until
2006 Jul 03
2
new clamav update miss 'clamav' user/group creation/update
Hi folks,
I just updated the clamav 'bundle' from the old 'clamav-server' (I think the
immediately previous one) and noticed that the 'clamav' user/group for this pkg is not created
by default by the rpm pkg.
At the same time, /var/log/clamav is not updated/created with clamav.clamav
ownership.
I don't know if it is my current config (the previous one is untouched anyway), but this
is what
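Until the package handles it, creating the account and fixing the ownership by hand
is simple enough; a sketch of roughly what a %pre scriptlet would normally do (the
home directory used here is an assumption):

    # Create the clamav group/user if they do not exist yet
    getent group clamav >/dev/null || groupadd -r clamav
    getent passwd clamav >/dev/null || \
        useradd -r -g clamav -d /var/lib/clamav -s /sbin/nologin -c "Clam AntiVirus" clamav

    # Give the log directory the ownership the package expects
    mkdir -p /var/log/clamav
    chown -R clamav:clamav /var/log/clamav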