Hi @all,
I'm trying to set up an OpenStack test cluster with one controller node
and three compute nodes. For this I've used the puppetlabs openstack
modules.
On the controller node I've used:
- openstack::auth_file
- openstack::controller
- openstack::repo
- openstack::repo::yum_refresh
- openstack::test_file
On the compute nodes I've used:
- openstack::compute
- openstack::repo
- openstack::repo::yum_refresh
The configuration is done entirely with class parameters. On the
controller node I specified the following parameters (the rest keep the
defaults from params.pp):
openstack::auth_file
  admin_password        s3cret

openstack::controller
  admin_email           john.doe@example.local
  admin_password        s3cret
  bridge_interface      eth1
  cinder_db_password    s3cret
  cinder_user_password  s3cret
  floating_range        172.17.0.128/25
  glance_api_servers    127.0.0.1:9292
  glance_db_password    s3cret
  glance_user_password  s3cret
  horizon_app_links     http://monitor.example.local/
  keystone_admin_token  keystone_admin_token
  keystone_db_password  s3cret
  multi_host            true
  mysql_root_password   s3cret
  nova_db_password      s3cret
  nova_user_password    s3cret
  private_interface     eth1
  public_address        192.168.1.1
  public_interface      eth0
  quantum               false
  rabbit_password       s3cret
  secret_key            s3cret
  verbose               true

openstack::test_file
  floating_ip           true
  quantum               false
  sleep_time            120
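Declared as resource-like classes in a node manifest, the controller configuration above would look roughly like this (a sketch only; the node name and declaration style are my assumptions, the parameter names and values are the ones listed above — check the module's README for the exact expected types, e.g. horizon_app_links may need to be an array):

```puppet
node 'controller1.example.local' {
  include openstack::repo
  include openstack::repo::yum_refresh

  class { 'openstack::auth_file':
    admin_password => 's3cret',
  }

  class { 'openstack::controller':
    admin_email          => 'john.doe@example.local',
    admin_password       => 's3cret',
    bridge_interface     => 'eth1',
    cinder_db_password   => 's3cret',
    cinder_user_password => 's3cret',
    floating_range       => '172.17.0.128/25',
    glance_api_servers   => '127.0.0.1:9292',
    glance_db_password   => 's3cret',
    glance_user_password => 's3cret',
    # format assumption: may need to be an array of links per the module docs
    horizon_app_links    => 'http://monitor.example.local/',
    keystone_admin_token => 'keystone_admin_token',
    keystone_db_password => 's3cret',
    multi_host           => true,
    mysql_root_password  => 's3cret',
    nova_db_password     => 's3cret',
    nova_user_password   => 's3cret',
    private_interface    => 'eth1',
    public_address       => '192.168.1.1',
    public_interface     => 'eth0',
    quantum              => false,
    rabbit_password      => 's3cret',
    secret_key           => 's3cret',
    verbose              => true,
  }

  class { 'openstack::test_file':
    floating_ip => true,
    quantum     => false,
    sleep_time  => 120,
  }
}
```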
On the compute nodes the configuration looks like this (for testing I
have both KVM and QEMU nodes):
openstack::compute
  cinder_db_password    s3cret
  db_host               controller1.example.local
  fixed_range           10.0.0.0/24
  glance_api_servers    controller1.example.local:9292
  internal_address      192.168.1.2
  keystone_host         controller1.example.local
  libvirt_type          qemu
  multi_host            true
  nova_db_password      s3cret
  nova_user_password    s3cret
  private_interface     eth1
  public_interface      eth0
  purge_nova_config     false
  quantum               false
  quantum_user_password s3cret
  rabbit_host           controller1.example.local
  rabbit_password       s3cret
  setup_test_volume     true
  verbose               true
  vncproxy_host         controller1.example.local
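The compute-side configuration, again as a rough node-manifest sketch (the node regex is an assumption; note that internal_address must be unique per node, so in a real manifest it would typically come from a fact rather than a literal):

```puppet
node /^compute\d+\.example\.local$/ {
  include openstack::repo
  include openstack::repo::yum_refresh

  class { 'openstack::compute':
    cinder_db_password    => 's3cret',
    db_host               => 'controller1.example.local',
    fixed_range           => '10.0.0.0/24',
    glance_api_servers    => 'controller1.example.local:9292',
    # per-node value; often taken from a fact such as $ipaddress_eth1
    internal_address      => '192.168.1.2',
    keystone_host         => 'controller1.example.local',
    libvirt_type          => 'qemu',  # 'kvm' on the KVM nodes
    multi_host            => true,
    nova_db_password      => 's3cret',
    nova_user_password    => 's3cret',
    private_interface     => 'eth1',
    public_interface      => 'eth0',
    purge_nova_config     => false,
    quantum               => false,
    quantum_user_password => 's3cret',
    rabbit_host           => 'controller1.example.local',
    rabbit_password       => 's3cret',
    setup_test_volume     => true,
    verbose               => true,
    vncproxy_host         => 'controller1.example.local',
  }
}
```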
The volume group preparations described in the module documentation were
done before the installation. The installation works so far and I can
connect to the controller node, but several things don't behave as
expected. For example, on the System Info page I only see services from
the controller node, none from the compute nodes. I can create VMs
without storage, but not VMs with storage. So I guess I did something
wrong or incompletely. Does anyone know if I'm missing something (e.g. a
parameter)?
The platform is Scientific Linux 6.4 with version 2.1.0 of the openstack
modules.
Regards, Thomas
--
Linux ... enjoy the ride!
--
You received this message because you are subscribed to the Google Groups
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to puppet-users+unsubscribe@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users.
For more options, visit https://groups.google.com/groups/opt_out.
It's really hard to look at the parameters and tell if something is
missing. I would check the service logs for clues. First have a look at:
/var/log/nova/nova-compute.log

On Thu, Aug 29, 2013 at 3:27 AM, Thomas Bendler <thomas.bendler@gmail.com> wrote:
> [quoted message trimmed]