similar to: ANNOUNCE: Fedora/RHEL packages for 0.23.0

Displaying 20 results from an estimated 20000 matches similar to: "ANNOUNCE: Fedora/RHEL packages for 0.23.0"

2007 Jul 27
6
puppet-0.23.1 rpm's
Hi, I've just built rpm's for puppet-0.23.1 - since there were a few people who had trouble with the update for 0.23.0 (the change in config files requires a bit of rpm trickery), I decided to be a little more cautious in how I push the new packages. So far, new packages are (or, will be shortly) available in rawhide, Fedora 7 updates-testing, and in my yum repos for
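(A hedged sketch of checking how rpm handled the config-file change after such an upgrade; the package name and the /etc/puppet path are assumptions for that era, not taken from the announcement:)

    # List which files the puppet package marks as config files
    rpm -q --configfiles puppet

    # See whether the upgrade staged new defaults (.rpmnew) or
    # moved your edited files aside (.rpmsave)
    find /etc/puppet -name '*.rpmnew' -o -name '*.rpmsave'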
2007 Feb 12
9
New rpms for lockdir problem
I just built new puppet rpm's for Fedora (puppet-0.22.1-2) that fix the lockdir problem that many of you have encountered. The packages should show up on a mirror near you very soon. Unfortunately, I am having trouble getting to my buildsystem that I use for the RHEL versions of the RPM; I made the source rpm available on my people page[1]. If you need the RPM for RHEL, you need to
2006 Nov 21
5
strange puppetd error
Greetings, I'm trying to set up a minimal CentOS 4 kickstart that installs puppet during the %post and then uses puppet to build up the rest of the configuration. I've gotten things so that puppet 0.20.1-2 will install using yum during the %post in the kickstart, but now I'm seeing this error when it tries to run during %post and after I reboot the newly installed
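(A minimal sketch of the kind of %post section being described, assuming the kickstart environment already has a yum repo that carries puppet; the service name is an assumption for 0.20.x-era packages:)

    %post
    # Install puppet from an already-configured yum repo
    yum -y install puppet
    # Make sure the client starts after the reboot
    chkconfig puppet on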
2007 Jun 24
3
Facter operatingsystemrelease on Fedora
Currently, facter returns the kernel version as both the kernel and operatingsystemrelease facts. I would like to change that so that on Fedora, operatingsystemrelease is the release number (5, 6, 7, etc.) or 'Rawhide' ... are there any objections to making that change? David
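(For context, a quick check of the two facts in question; the values mentioned in the comments are what the proposal would produce, not guaranteed output:)

    # Today both of these report the kernel version on Fedora
    facter kernelrelease
    facter operatingsystemrelease
    # Under the proposal, the second would instead print e.g. "7" or "Rawhide"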
2009 Jun 12
7
Obtaining puppet and facter for RHEL5/Centos5
What's the correct yum repo to use for installing Puppet & Facter on RHEL5 and CentOS5? I used to get them from the dlutter-rhel5 repo but this seems to be massively out of date now - the latest version of puppet-server in there is 0.24.5-1.el5 and facter 1.5.4-1.el5. In EPEL I see puppet-server 0.24.8-1.el5.1 and facter 1.5.4-1.el5, which is better, but isn't 1.5.4 the version
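(A hedged way to compare what each configured repo actually offers before picking one; the 'epel' repo id is an assumption, adjust to whatever .repo files you have:)

    # Show every available version across the enabled repos
    yum --showduplicates list puppet-server facter

    # Include a normally-disabled repo in the comparison
    yum --enablerepo=epel --showduplicates list puppet-server facter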
2012 Dec 17
0
Status of libguestfs in Fedora, RHEL
Fedora: Fedora 16 libguestfs 1.16 (currently 1.16.34); Fedora 17 libguestfs 1.18 (currently 1.18.11); Fedora 18 libguestfs 1.20 (currently 1.20.0); Rawhide libguestfs 1.21 (development releases as usual). RHEL: RHEL 5 ancient libguestfs 1.2 (DO NOT USE
2012 Aug 10
1
Status of libguestfs in Fedora, RHEL
Just a note (mainly to get it clear for *me*!) where we are with libguestfs branches and Fedora/RHEL versions: Fedora: Fedora 16 libguestfs 1.16 (currently 1.16.29); Fedora 17 libguestfs 1.18 (currently 1.18.6); Fedora 18 (just branched) libguestfs 1.19.28, will move to the stable 1.20 branch when it is ready;
2006 Apr 13
0
horde updates and config files
Horde RPMs got updated last night. However, the way configuration files were handled probably wasn't the best: it moved my configuration files to rpmsave instead of leaving them as-is and putting the new configs into rpmnew (like most packages do). warning: /usr/share/horde/config/conf.php saved as /usr/share/horde/config/conf.php.rpmsave warning:
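(A short recovery sketch for the rpmsave behaviour described above; the horde config path comes from the warning messages, the rest is generic:)

    # Find every config file the upgrade moved aside or staged
    find /usr/share/horde/config -name '*.rpmsave' -o -name '*.rpmnew'

    # Compare your saved config with the freshly installed default
    diff -u /usr/share/horde/config/conf.php.rpmsave /usr/share/horde/config/conf.php

    # If the new default adds nothing you need, restore your version
    cp -a /usr/share/horde/config/conf.php.rpmsave /usr/share/horde/config/conf.php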
2007 Aug 31
1
rpmsave files and Pegasus
Hello, I just upgraded CentOS and am now doing some post-upgrade tasks to make sure everything is fine. First of all, it seems the upgrade added an rpmsave extension to some configuration files, which we should copy back; an example is /etc/httpd/conf/httpd.conf.rpmsave. Also, what is Pegasus? There are a lot of rpmsave files in /var/lib/pegasus, for example:
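(To answer the "what is Pegasus" part, a hedged check that asks rpm which package owns those files; on CentOS this is usually the tog-pegasus CIM broker, but verify locally and mind the directory capitalisation:)

    # Which package owns the directory full of rpmsave files?
    rpm -qf /var/lib/pegasus

    # Read that package's description
    rpm -qi tog-pegasus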
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote: > Hi Nithya, > > I applied the workaround for this bug and now df shows the right size: > > That is good to hear. > [root at stor1 ~]# df -h > Filesystem Size Used Avail Use% Mounted on > /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0 > /dev/sdc1
2007 Feb 04
5
package provider multiple defaults
I am working on some initial testing of puppet and noticed that when using the package type, I see this in my client logs (from different runs): warning: Found multiple default providers for package: up2date, yum; using up2date warning: Found multiple default providers for package: yum, up2date; using yum This is on a CentOS 4.4 client with puppet-0.22.0-1.el4 (from dlutter repo). After
2018 Jan 23
1
[PATCH nbdkit] Change the default protocol to newstyle.
nbdkit <= 1.1.28 defaulted to the oldstyle protocol for compatibility with qemu and libguestfs. However qemu >= 2.6 can now work with either protocol and is widely installed. Also newstyle is required for newer features such as export names and TLS. In addition nbd-client dropped support for oldstyle entirely. You can select the oldstyle protocol by adding ‘-o’, and it is still tested.
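(A hedged illustration of the flag mentioned above; the file plugin and disk image name are only examples, and plugin argument syntax has varied across nbdkit versions:)

    # Serve a disk image with the new default (newstyle) protocol
    nbdkit file file=disk.img

    # Explicitly fall back to the oldstyle protocol for old clients
    nbdkit -o file file=disk.img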
2019 Sep 17
0
[PATCH libnbd 2/2] api: New API for reading NBD protocol.
This commit adds a new API which can be used from the connected state to read back which NBD protocol (e.g. oldstyle, newstyle-fixed) we are using. It was helpful to add a new state in newstyle negotiation (%NEWSTYLE.FINISHED) so we can route all successful option negotiations through a single path before moving to the %READY state, allowing us to set h->protocol in one place. ---
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, I applied the workaround for this bug and now df shows the right size: [root at stor1 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0 /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1 stor1data:/volumedisk0 101T 3,3T 97T 4% /volumedisk0 stor1data:/volumedisk1
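(A hedged way to sanity-check figures like the ones above: compare what each brick filesystem reports with what the volume reports through its mount, and review the brick layout; volume and mount names are taken from the thread:)

    # Size of each local brick filesystem
    df -h /mnt/glusterfs/vol0 /mnt/glusterfs/vol1

    # Size the distributed volumes report through their mounts
    df -h /volumedisk0 /volumedisk1

    # Brick-to-volume layout, to see how capacity should add up
    gluster volume info volumedisk0
    gluster volume info volumedisk1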
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote: > Hi Nithya, > > My initial setup was composed of 2 similar nodes: stor1data and stor2data. > A month ago I expanded both volumes with a new node: stor3data (2 bricks > per volume). > Of course, then to add the new peer with the bricks I did the 'balance > force' operation.
2007 Dec 17
21
New error in Centos 5.1
Just started a "pilot" puppet server for real after messing around in VMs for the past week or so... I used the 0.24.0 since it was available, and on the test run, got this: err: Could not prefetch package provider 'yum': Execution of '/usr/bin/python /usr/lib/ruby/site_ruby/1.8/puppet/provider/package/yumhelper.py' returned 512: /usr/bin/python:
2015 Jul 06
0
Can't install gmime22
Hello list, I'm trying to install the gmime22 package, which is one of the packages reported as required by the ./contrib/scripts/install_prereq test. Whatever I do, I hit a dead end. In the regular yum repositories that I use (centos, epel, rpmforge, asterisk, digium) it is not found. I've found it in Fedora repositories; however, trying to use those I get all sorts of errors: On
2019 Sep 17
1
[libnbd PATCH] api: Add nbd_get_structured_replies_negotiated
Similar to nbd_get_tls_negotiated, for observing what we actually settled on with the server, rather than what was requested. --- generator/generator | 30 +++++++++++++++++++++++++----- lib/handle.c | 6 ++++++ tests/meta-base-allocation.c | 15 +++++++++++++++ tests/oldstyle.c | 7 ++++++- 4 files changed, 52 insertions(+), 6 deletions(-) diff --git
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message. Below is the output for both volumes: [root at stor1t ~]# gluster volume rebalance volumedisk1 status Node Rebalanced-files size scanned failures skipped status run time in h:m:s
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, My initial setup was composed of 2 similar nodes: stor1data and stor2data. A month ago I expanded both volumes with a new node: stor3data (2 bricks per volume). To add the new peer with its bricks I then ran the 'balance force' operation. This task finished successfully (you can see the info below) and the number of files on the 3 nodes was very similar. For volumedisk1 I
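(For reference, a hedged sketch of the expansion sequence being described; the peer name and volume name come from the thread, the brick path is an assumption:)

    # Join the new node to the trusted pool
    gluster peer probe stor3data

    # Add its brick to the existing distributed volume
    gluster volume add-brick volumedisk1 stor3data:/mnt/glusterfs/vol1/brick1

    # Spread existing files across the new brick and watch progress
    gluster volume rebalance volumedisk1 start force
    gluster volume rebalance volumedisk1 status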