Displaying 12 results from an estimated 12 matches for "conf_dir".
2007 Feb 14
9
managing multiple files
How can I express the following in Puppet?
$httpd_conf = "/etc/http/conf/httpd.conf"
$vhosts_conf = "/etc/http/conf/vhosts.conf"
@files = ("$httpd_conf", "$vhosts_conf")
foreach f (@files) {
  file { "$f":
    owner => root, group => root, mode => 664,
    source => "puppet://$server/apache/$f",
  }
}
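For reference, one way to express this in Puppet itself (rather than the Perl-style loop above) is a defined type that takes each path as its title and is instantiated with an array of titles. This is only a sketch; the name managed_conf is made up for illustration:

define managed_conf() {
    # $name is the resource title, i.e. the full path of the file to manage
    file { $name:
        owner  => root,
        group  => root,
        mode   => 664,
        source => "puppet://$server/apache$name",   # $name already begins with "/"
    }
}

# Passing an array of titles creates one instance of the define per path
managed_conf { ["/etc/http/conf/httpd.conf", "/etc/http/conf/vhosts.conf"]: }

A plain file resource can also take an array of titles directly when the body does not need to vary per file.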
2007 Jan 26
4
mongrel_cluster 0.2.2 prerelease
Hey y'all:
I've added some new stuff to mongrel_cluster. Give it a try and let me
know if it works for you.
* Added '--clean' to cluster::start to force removal of the pidfile
before trying to start the cluster member. This is useful for recovering
from unexpected process death.
* Added '--only PORT' to cluster::* to support running a
2006 May 30
6
Getting /etc/mongrel_cluster and startup script to work?
Hi,
I'm trying to set up my mongrel clusters to start on boot. I have
followed the instructions under the "On Boot Initialization Setup"
section at http://mongrel.rubyforge.org/docs/mongrel_cluster.html.
When I try to start it up, I get the following error:
me at mybox.com: sudo /etc/init.d/mongrel_cluster start
Starting all mongrel_clusters...
!!! Path to log file not
2012 Feb 02
1
CentOS6 virt-manager fails
...383, in <module>
main()
File "/usr/share/virt-manager/virt-manager.py", line 315, in main
config = virtManager.config.vmmConfig(appname, appversion, glade_dir)
File "/usr/share/virt-manager/virtManager/config.py", line 96, in __init__
self.conf.add_dir(self.conf_dir, gconf.CLIENT_PRELOAD_NONE)
The /tmp dbus socket doesn't exist, but dbus-daemon is running.
The only references to this problem I could find were old and unanswered
posts to Fedora lists.
2006 Aug 20
1
Problem with: service mongrel_cluster start
Hi all,
I am having trouble transitioning from fastcgi to Mongrel. I have it
all working apart from one small part: I can't get the service to
start on boot.
I have followed the instructions and if I call the script directly:
i.e. "/etc/init.d/mongrel_cluster start" it works fine, but if I call
it using "service mongrel_cluster start" it fails with the
2014 Oct 27
0
Error starting Virtual Machine Manager: Failed to contact configuration server...
...383, in <module>
main()
File "/usr/share/virt-manager/virt-manager.py", line 315, in main
config = virtManager.config.vmmConfig(appname, appversion, glade_dir)
File "/usr/share/virt-manager/virtManager/config.py", line 98, in __init__
self.conf.add_dir(self.conf_dir, gconf.CLIENT_PRELOAD_NONE)
GError: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Could not send message to...
2018 Oct 01
7
[PATCH v2 API PROPOSAL 0/5] inspection Add network interfaces to inspection data.
The proposed API is the same as v1, but this includes an
implementation (for /etc/sysconfig/network-scripts/ifcfg-*) and
modifications to virt-inspector. This compiles and works.
If you look in patch 5 you can see the proposed output as virt-inspector
XML for a guest (although this guest has not been booted, so a real
guest would hopefully have a hwaddr="MAC" attribute too).
Rich.
2006 Jun 20
3
Running Mongrel Cluster on boot
I've installed pen and mongrel with mongrel_cluster (as gems) on
Debian sarge. I can run everything from the command line fine,
including starting from the /etc/init.d/mongrel_cluster script.
On boot, though, mongrel_cluster fails with this message:
/usr/local/lib/site_ruby/1.8/rubygems.rb:204:in
`report_activate_error': Could not find RubyGem mongrel_cluster (> 0)
(Gem::LoadError)
2014 Oct 27
0
How could the admin do to grant me with permission to run virsh as unprivileged user?
...in <module>
main()
File "/usr/share/virt-manager/virt-manager.py", line 315, in main
config = virtManager.config.vmmConfig(appname, appversion, glade_dir)
File "/usr/share/virt-manager/virtManager/config.py", line 98, in __init__
self.conf.add_dir(self.conf_dir, gconf.CLIENT_PRELOAD_NONE)
GError: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Could not send message t...
2013 Nov 25
2
mcp ping return no responses
...= 300
#registration = Meta
# Middleware
connector = activemq
plugin.activemq.pool.1.host = puppet.test.italy.cloudlabcsi.local
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = mcopwd
plugin.activemq.pool.1.ssl = 0
# NRPE
#plugin.nrpe.conf_dir = /etc/nrpe.d
# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
logger_type = file
keeplogs = 5
max_log_size = 2097152
logfacility = user
------------------
The log
D, [2013-11-25T12:46:36.725721 #7697] DEBUG -- : pluginmanager.rb:167:in
`loadclass' Loading Mcollecti...
2010 Feb 11
1
[PATCH] Provides a reference implementation management server.
...http://code.gustavonarea.net/repoze.who-testutil/).
+
+ """
+
+ application_under_test = 'main_without_authn'
+
+ def setUp(self):
+ """Method called by nose before running each test"""
+ # Loading the application:
+ conf_dir = config.here
+ wsgiapp = loadapp('config:test.ini#%s' % self.application_under_test,
+ relative_to=conf_dir)
+ self.app = TestApp(wsgiapp)
+ # Setting it up:
+ test_file = path.join(conf_dir, 'test.ini')
+ cmd = SetupComma...
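The excerpt above is cut off mid-line. Purely as an illustration of the visible pattern (loading a WSGI app from an ini file with paste.deploy's loadapp and wrapping it in a TestApp), a self-contained sketch might look like the one below; the class name, the conf_dir value, and the use of WebTest's TestApp are assumptions, not part of the patch:

from os import path

from paste.deploy import loadapp
from webtest import TestApp


class ControllerTestCase(object):
    """Hypothetical test base class mirroring the patch's setUp."""

    application_under_test = 'main_without_authn'

    def setUp(self):
        """Method called by nose before running each test."""
        # In the patch, conf_dir comes from config.here (the directory that
        # holds the ini file); a fixed path stands in for it in this sketch.
        conf_dir = '/path/to/app'
        # Load the WSGI application defined in the named section of test.ini.
        wsgiapp = loadapp('config:test.ini#%s' % self.application_under_test,
                          relative_to=conf_dir)
        self.app = TestApp(wsgiapp)
        # The original then builds a setup command around this file; that
        # part of the excerpt is truncated.
        test_file = path.join(conf_dir, 'test.ini')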
2010 Feb 17
0
[PATCH] Provides the new node lifecycle events.
...http://code.gustavonarea.net/repoze.who-testutil/).
-
- """
-
- application_under_test = 'main_without_authn'
-
- def setUp(self):
- """Method called by nose before running each test"""
- # Loading the application:
- conf_dir = config.here
- wsgiapp = loadapp('config:test.ini#%s' % self.application_under_test,
- relative_to=conf_dir)
- self.app = TestApp(wsgiapp)
- # Setting it up:
- test_file = path.join(conf_dir, 'test.ini')
- cmd = SetupComma...