similar to: Upgrading from 3.4 to 4.0 beta X

Displaying 20 results from an estimated 10000 matches similar to: "Upgrading from 3.4 to 4.0 beta X"

2005 Jan 12
3
bind and 3.4
Hello, I encountered a problem when upgrading from 3.3 to 3.4 on i386. The machine is a production name server running bind. Looks like the new rpm moved my named.conf to .rpmsave and chkconfig'ed bind to off. That's really bad. A more gentle behavior would have been to save the new named.conf to .rpmnew and not mess with initscripts. Anyone else notice that? Francois Caen
2006 Jan 24
1
upgrading 3.6->4
greetings - I'm trying to upgrade a CentOS 3.6 box to version 4. I tried to do an upgrade using yum but that failed, and now the CD installer is not letting me upgrade either. Here's what I did: following the howto, I tried to do the upgrade using yum. I installed the GPG key and upgraded the centos-release and centos-yumconf packages. When I tried to upgrade the yum package I ran
2005 Apr 17
2
CentOS4 upgradeany hangs while loading sata_via
Hi, I have a machine running CentOS 3.4 which I want to upgrade to CentOS 4. I booted from the CentOS 4 CD1 using "linux upgradeany". The problem is that loading sata_via never terminates. lsmod shows the sata_via module in state "Loading" while all others are in state "Live". The dmesg output shows that the module gets loaded, sees two disks, then prints info
2013 Mar 07
4
[Gluster-devel] glusterfs-3.4.0alpha2 released
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha2/
SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.0alpha2.tar.gz
This release is made off jenkins-release-19 -- Gluster Build System
Gluster-devel mailing list: Gluster-devel at nongnu.org, https://lists.nongnu.org/mailman/listinfo/gluster-devel
2005 Apr 21
5
kbd remove error
Hi guys, I just partially upgraded CentOS 3.4 to CentOS 4 (using apt-get) and I'm having a problem removing kbd 1.08-10.2. kbd 1.12-2 is already installed and I'm trying to remove kbd 1.08 through apt-get and rpm -e. Here's what happened when I used apt-get: # apt-get remove kbd#1.08-10.2 Reading Package Lists... Done Building Dependency Tree... Done The following packages will be REMOVED:
2007 Aug 31
1
rpmsave files and Pegasus
Hello, I just upgraded CentOS and am now doing some post-upgrade tasks to make sure everything is fine. First of all, it seems the upgrade added a .rpmsave extension to configuration files, which we should copy back; an example is /etc/httpd/conf/httpd.conf.rpmsave. Also, what is Pegasus? There are a lot of .rpmsave files in /var/lib/pegasus, for example:
2006 Jan 14
1
Getting rid of all the .rpmsave files
Hi list! After yum upgrade of my CentOS 4.0 to 4.2 x86_64 box I now have countless .rpmsave files, about 90% of them I never touched or are config/start scripts. Does anyone have a neat script that will find all the rpmsave stuff in /etc and then prompts per file whether it can replace the original or not? Somehow doing this all by hand doesn't seem a very attractive idea :) Thanks!!
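A minimal sketch of the kind of helper being asked for here, assuming POSIX sh and standard `find`/`diff`. The function names are made up for illustration, and "replace the original" is taken to mean moving the saved copy back over it; review each diff before answering "y".

```shell
#!/bin/sh
# Sketch of an .rpmsave cleanup helper (illustrative, not battle-tested).

scan_rpmsave() {
    # Print "saved<TAB>original" pairs for every *.rpmsave under $1.
    find "$1" -name '*.rpmsave' -type f | while IFS= read -r saved; do
        printf '%s\t%s\n' "$saved" "${saved%.rpmsave}"
    done
}

restore_interactively() {
    # For each pair, show the diff and ask whether to restore the saved copy.
    scan_rpmsave "$1" | while IFS="$(printf '\t')" read -r saved orig; do
        diff -u "$orig" "$saved" 2>/dev/null
        printf 'Restore %s from %s? [y/N] ' "$orig" "$saved"
        read -r answer </dev/tty || answer=n
        [ "$answer" = y ] && mv -v "$saved" "$orig"
    done
}

# Example (inspect the list first, then run the interactive pass):
# scan_rpmsave /etc
# restore_interactively /etc
```

Listing with `scan_rpmsave` first is a cheap dry run before anything is moved.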
2005 Oct 17
2
Can't install CentOS4
Hi everyone, My email server is running CentOS 3 and I can't upgrade it to CentOS 4 using the CDs. At first I thought it was the old Megaraid card (that's no longer supported), so I replaced that with a new card. I've tried ISOs from CentOS 4, 4.1 and now 4.2 which I've tested and booted in other machines. Just not this one. I've never had a problem like this
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, I applied the workaround for this bug and now df shows the right size:
[root at stor1 ~]# df -h
Filesystem              Size  Used  Avail Use% Mounted on
/dev/sdb1                26T  1,1T    25T   4% /mnt/glusterfs/vol0
/dev/sdc1                50T   16T    34T  33% /mnt/glusterfs/vol1
stor1data:/volumedisk0  101T  3,3T    97T   4% /volumedisk0
stor1data:/volumedisk1
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi, Some days ago all my glusterfs configuration was working fine. Today I realized that the total size reported by the df command has changed and is smaller than the aggregated capacity of all the bricks in the volume. I checked that all the volume statuses are fine, all the glusterd daemons are running, and there are no errors in the logs; however, df shows a bad total size. My configuration for one volume:
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, There is a known issue with gluster 3.12.x builds (see [1]) so you may be running into this. The "shared-brick-count" values seem fine on stor1. Please send us "grep -n "share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes so we can check if they are the cause. Regards, Nithya [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
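For anyone else hitting this, the check Nithya asks for can be wrapped in a small helper. On a setup where every brick sits on its own filesystem, each brick file should show shared-brick-count=1; other values are the signature of the 3.12.x bug she references. The function name is made up, and the exact layout under the volume directory is an assumption (hence the recursive grep).

```shell
#!/bin/sh
# Sketch: collect shared-brick-count values from a volume's glusterd
# directory on each node, as requested in the thread.

check_shared_brick_count() {
    # $1: a volume's glusterd directory,
    # e.g. /var/lib/glusterd/vols/volumedisk1 (path taken from the thread).
    # Recursive so the brick files are covered wherever they sit.
    grep -rn "shared-brick-count" "$1" 2>/dev/null
}

# Example (run on every node and compare):
# check_shared_brick_count /var/lib/glusterd/vols/volumedisk1
```

Any line reporting a value other than 1 points at the affected node.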
2012 May 25
4
Upgrading FC2 to CentOS 5.* - anyone second this?
Greetings, I *do* still have an FC2 box. Would anyone second this procedure: http://www.centos.org/modules/newbb/viewtopic.php?topic_id=14052&forum=37&post_id=47945 Thanks. Max Pyziur pyz at brama.com
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, My initial setup was composed of 2 similar nodes: stor1data and stor2data. A month ago I expanded both volumes with a new node: stor3data (2 bricks per volume). Of course, then to add the new peer with the bricks I did the 'balance force' operation. This task finished successfully (you can see info below) and number of files on the 3 nodes were very similar . For volumedisk1 I
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, Below is the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node  Rebalanced-files  size  scanned  failures  skipped  status  run time in h:m:s
---------  -----------  -----------  -----------  -----------  -----------  ------------
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, then to add the new peer with the bricks I did the 'balance
> force' operation.
2004 Nov 30
5
RE: [Shorewall-devel] SFTP
On Tue, 2004-11-30 at 12:17 +0700, Matthew Hodgett wrote:
> As for the 169.254 issue I tried to search the archives but got nothing.
> I then tried to search on generic words, nothing. I then tried some
> really common words like 'help', 'initiated', 'masq' - nothing. I think
> the index might be corrupt because I get no
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message. Below is the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node  Rebalanced-files  size  scanned  failures  skipped  status  run time in h:m:s
---------  -----------  -----------  -----------  -----------
2005 Jun 23
3
Automated YUM update kills DNS
We've got several CentOS 3.x systems running DNS that we keep updated automatically via YUM. Recently two of those systems (not all of them) when updating themselves to the latest versions of BIND, automatically replaced /etc/named.conf with a new one and saved the old one as /etc/named.conf.rpmsave. Which of course broke DNS for those servers. All servers got updated, but only two of
2005 Jul 30
2
How does CentOS compare with FC 3
I have FC 3 running on my "play/test" box. Is CentOS (RHEL) a level above? Can I do an Upgrade or should I do an Install? Todd