Hi all,

I had a bit of time to research the existing device code to see if I can use it for an integration with two specific use cases:

1. discovery/inventory - access hardware inventory and store it somewhere where it can be retrieved.

So far, device supports this use case:
- specify a list of device endpoints in device.conf
- run puppet device to get their facts to serve as inventory (although puppet device looks like it gets facts and requests catalogs, I will probably call the facts method directly to just get the facts)
- have the front end query these facts from PuppetDB

2. management - manage the process of bringing up a cluster from scratch.

This is the use case where puppet device is problematic.

In this use case, an external system needs to specify how a collection of resources should be configured. The types of these resources are heterogeneous, for example:
- Server
- Storage
- Network
- add Port
- create server

These hardware configuration rules (and their dependencies) map pretty cleanly to the Puppet DSL and the Resource/Graph model, where a manifest represents multiple devices and multiple endpoints.

I had the following issues with puppet device for this use case:

1. It iterates through the endpoints and configures them one at a time.

This is probably the biggest barrier. I need to keep track of a collection of resources that target multiple endpoints and apply them in a certain order. Looking at the device code, it seems to just iterate through the endpoints in device.conf and configure them one at a time.

I spent some time thinking about the current device command and how I might use it to configure workflows across multiple endpoints:
- on the puppet master, keep a queue (or list) for each endpoint that needs to be configured
- have an external process (the dispatcher) that keeps track of the configuration that needs to be applied (along with their endpoints) and stores the resources that represent that configuration into the correct queue for its endpoint
- have an ENC that checks the certname of a device when it checks in, maps it to a queue, and clears all entries from that queue (for it to apply)
- if the dispatcher keeps track of which resources it put onto which queue, it can track the reports for those devices to know when its entire job is completed

The above explanation is the best way I could think of to use the existing device command, but it is cumbersome enough that it warrants not using the device model.

2. It does not allow for the specification of dependencies between multiple device endpoints. It only allows for certain endpoints to be processed in a certain order.

This is pretty much the same as #1, but worth mentioning separately.

3. It invents its own command line for doing things (it does not cleanly interoperate with puppet resource, puppet apply, or puppet agent, which represents a major loss of functionality).

4. Management of device.conf

The existence of device.conf creates its own management issues. You need to assign a single node to a single device, you have to manage the process for getting the credentials to that device, and you have to figure out how many devices/which devices go to which nodes as you scale out to a large number of device endpoints.

*Solution:*

The transport model (as created by Nan Liu) seems to get around the issues mentioned above and would allow a pretty clean integration path.

For folks not familiar with the transport model:
It uses regular types and providers that accept a parameter called transport that can be used to indicate that the resource should be applied against some remote endpoint.

For example:

transport { 'ssh':
  url      => some_url,
  password => 'some_password',
}

port { 'some_port':
  transport => Transport['ssh'],
}

This will work perfectly for my use case.

*The problem:*

This is fundamentally incompatible with the device model. I will not be able to leverage resources implemented using this model, and people using the device model will not be able to leverage resources that I/we write.

I would feel much more confident in transport if it was possible to still leverage the logic encoded in Puppet devices. This is impossible b/c devices label themselves in such a way that they can only be consumed by the puppet device command (while I would use resource, apply, and agent).

Is this something we could just fix in the device types and providers? To have them either get their credentials from device.conf or from transport resources? Could we get rid of the code that allows puppet devices to only be applied using the puppet device command?

Thanks to everyone that has made it this far in the email :) I look forward to some great discussions!

- Dan
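To make the multi-endpoint workflow in use case 2 concrete, here is a minimal sketch of how it could be expressed under the transport model. The type names (port, volume), their parameters, the transport titles, and the URLs are hypothetical placeholders rather than any existing module's API:

transport { 'switch01':
  url => 'ssh://admin:secret@switch01.example.com',
}

transport { 'array01':
  url => 'https://admin:secret@array01.example.com',
}

# hypothetical network resource on the first endpoint
port { 'switch01 eth1/10':
  ensure    => present,
  transport => Transport['switch01'],
}

# hypothetical storage resource on a second endpoint, ordered after the port,
# so a single catalog spans several devices and encodes their dependencies
volume { 'array01 vol_app':
  ensure    => present,
  transport => Transport['array01'],
  require   => Port['switch01 eth1/10'],
}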
On Wed, Dec 11, 2013 at 8:51 AM, Dan Bode <bodepd@gmail.com> wrote:
> I had a bit of time to research the existing device code to see if I can
> use it for an integration with two specific use cases:

I'm not sure what issues are still actively worked on, and I'm keeping an eye on the redmine migration to see what gets ported over. I've had onsite discussions with PL developers, and I would love to get more feedback and a roadmap for devices vs. transport. For now, I'm staying with transport resources. Comments below.

> 1. discovery/inventory - access hardware inventory and store it somewhere
> where it can be retrieved.
>
> So far, device supports this use case:
> - specify a list of device endpoints in device.conf
> - run puppet device to get their facts to serve as inventory
> - have the front end query these facts from PuppetDB

puppet device facts are not really invoked via facter, and have some gotchas (such as symbol keys). They are tucked away in lib/puppet/util/network_device/<device_name>/facts.rb. However, since facter is not available for puppet device, the only win for device is inventory in PuppetDB. The missing functionality can be implemented as a resource for the transport solution which exports facts via a puppet face.

> 2. management - manage the process of bringing up a cluster from scratch.
>
> [...]
>
> These hardware configuration rules (and their dependencies) map pretty
> cleanly to the Puppet DSL and the Resource/Graph model, where a manifest
> represents multiple devices and multiple endpoints.

This is one of the main reasons I'm using transport, since it expresses cross-node dependency using the existing DSL.

> I had the following issues with puppet device for this use case:
>
> 1. It iterates through the endpoints and configures them one at a time.
>
> [...]
>
> *The problem:*
>
> This is fundamentally incompatible with the device model. I will not be
> able to leverage resources implemented using this model, and people using
> the device model will not be able to leverage resources that I/we write.

I think it's possible to switch between them. I'll use the F5 module as an example, which is currently puppet device. We just need to check in the following order:

1. If a facter connection value exists, use the facter value to connect (fixes the problem for puppet resource inspection).
2. If the resource has a transport parameter, use the catalog value.
3. Use the setting in device.conf.

Change the following code to (much abbreviated):
https://github.com/nanliu/puppetlabs-f5/blob/master/lib/puppet/provider/f5.rb#L29-L39

@transport ||= Puppet::Util::NetworkDevice::F5::Device.new(Facter.value(:url)).transport if Facter.value(:url)
@transport ||= PuppetX::Puppetlabs::Transport.retrieve(:resource_ref => resource[:transport], :catalog => resource.catalog, :provider => 'f5') if resource[:transport]
@transport ||= Puppet::Util::NetworkDevice.current.transport

> I would feel much more confident in transport if it was possible to still
> leverage the logic encoded in Puppet devices. This is impossible b/c
> devices label themselves in such a way that they can only be consumed by
> the puppet device command (while I would use resource, apply, and agent).

I'm not sure what advantage of puppet device you would like to see in transport.

> Is this something we could just fix in the device types and providers? To
> have them either get their credentials from device.conf or from transport
> resources? Could we get rid of the code that allows puppet devices to only
> be applied using the puppet device command?

See above. Certainly there are a few bugs, such as apply_to_device, that need to be fixed, but I see a way to be flexible for both use cases. Though for the reasons listed earlier, I'll stick with transport for now.

Thanks,

Nan
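Nan's suggestion above, that inventory could be implemented as a resource for the transport solution which exports facts via a puppet face, might look roughly like the following. This is only a sketch: the device_facts type does not exist, and the transport parameters follow the username/password/server form used later in this thread:

transport { 'bigip':
  username => 'admin',
  password => 'secret',
  server   => 'f5.example.com',
}

# hypothetical resource that collects the device's facts over the transport
# and hands them to a face/terminus for storage in PuppetDB
device_facts { 'f5.example.com':
  transport => Transport['bigip'],
}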
Hi,

On 11-12-2013 10:51:08, Dan Bode wrote:
> [...]
>
> I had the following issues with puppet device for this use case:
>
> 1. It iterates through the endpoints and configures them one at a time.
>
> This is probably the biggest barrier. I need to keep track of a collection
> of resources that target multiple endpoints and apply them in a certain
> order. Looking at the device code, it seems to just iterate through the
> endpoints in device.conf and configure them one at a time.

I currently use a simple solution to work around this problem where I create the device.conf through an external process on the fly, specify my devices and their dependencies in a YAML file, run them in order, and just check the exit code.

It looks something like this:

---
defaults:
  scheme: sshios
  port: 22
  userinfo: foo:bar
  query: crypt=true
  cmd: /usr/bin/puppet device --verbose --environment=network --detailed-exit-codes --deviceconfig={{DEVCFG}} || [ $? -eq 2 ]

devices:
  dc1:
    sw-dc1-01.foo.bar:
      deps:
        - '*'
    sw-dc1-02.foo.bar:
    sw-dc1-03.foo.bar:
      deps:
        - sw-dc1-02.foo.bar
    str-dc1-01.foo.bar:
      scheme: netapp
      deps:
        - sw-dc1-01.foo.bar

> [...]
>
> *Solution:*
>
> The transport model (as created by Nan Liu) seems to get around the issues
> mentioned above and would allow a pretty clean integration path.
>
> For folks not familiar with the transport model: it uses regular types and
> providers that accept a parameter called transport that can be used to
> indicate that the resource should be applied against some remote endpoint.
>
> For example:
>
> transport { 'ssh':
>   url      => some_url,
>   password => 'some_password',
> }
>
> port { 'some_port':
>   transport => Transport['ssh'],
> }
>
> This will work perfectly for my use case.

Can you point me to a thread where this was discussed? I can only see an advantage of the proposed model for certain situations / device types, but not for the traditional use case.

Thanks,
Markus
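For readers following the workaround above: each entry in that YAML drives a separate puppet device run against a generated device.conf, and each run compiles whatever node definition matches that device's certname, so ordering is expressed between whole device runs rather than between individual resources. A rough sketch of what the corresponding node definitions might look like (the interface names and parameters are illustrative only):

node 'sw-dc1-02.foo.bar' {
  interface { 'GigabitEthernet0/1':
    description => 'uplink to core',
  }
}

node 'sw-dc1-03.foo.bar' {
  # per the deps above, this device is only run after sw-dc1-02.foo.bar succeeds
  interface { 'GigabitEthernet0/2':
    description => 'downlink to access layer',
  }
}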
On Thu, Dec 12, 2013 at 1:46 AM, Nan Liu <nan.liu@gmail.com> wrote:
> [...]
>
> puppet device facts are not really invoked via facter, and have some
> gotchas (such as symbol keys). They are tucked away in
> lib/puppet/util/network_device/<device_name>/facts.rb. However, since
> facter is not available for puppet device, the only win for device is
> inventory in PuppetDB. The missing functionality can be implemented as a
> resource for the transport solution which exports facts via a puppet face.

I can't think of a case where I need facts from a device to make configuration decisions. Perhaps I'm just not far enough into it :)

> [...]
>
> I think it's possible to switch between them. I'll use the F5 module as an
> example, which is currently puppet device. We just need to check in the
> following order:
>
> 1. If a facter connection value exists, use the facter value to connect
>    (fixes the problem for puppet resource inspection).

Yep, facts would totally work, but AFAIK puppet resource does not invoke facter (are you saying that it should?).

> 2. If the resource has a transport parameter, use the catalog value.
> 3. Use the setting in device.conf.

This is pretty much what I was thinking.

> [...]
>
> I'm not sure what advantage of puppet device you would like to see in
> transport.

My main concern here is with duplicated code. Currently, even if you did your above recommendations, you still wind up with connection information encoded in both the Puppet device as well as the transport. I would like to see a reasonable pattern for how they can share code.

> See above. Certainly there are a few bugs, such as apply_to_device, that
> need to be fixed,

I might just be struggling to understand exactly how this works. apply_to_device is required for using the credentials from device.conf, and means that the resources can't be used elsewhere?

> but I see a way to be flexible for both use cases. Though for the reasons
> listed earlier, I'll stick with transport for now.
On Thu, Dec 12, 2013 at 2:08 AM, Markus Burger <markus.burger@uni-ak.ac.at> wrote:
> [...]
>
> I currently use a simple solution to work around this problem where I
> create the device.conf through an external process on the fly, specify my
> devices and their dependencies in a YAML file, run them in order, and just
> check the exit code.
>
> [...]

Just to clarify, this is letting you specify the order in which resources are configured on your devices? This looks like it only allows you to specify order between devices (and not between resources). It also looks like you are still grouping resources based on how a certname maps to a device? (So in this example, if you had a workflow that needed to configure 10 resources against 10 endpoints, this would involve updating the 10 node definitions in your site manifest?)

> Can you point me to a thread where this was discussed?

Maybe this has never been discussed in public. I happen to have worked next to Nan for a while.

> I can only see an advantage of the proposed model for certain
> situations / device types, but not for the traditional use case.

What is the traditional use case? Management of individual devices as opposed to workflow? Is that what people want? Is it how they manage devices?

I'm not even advocating that it should be the preferred model. I'm just outlining my use case, how the current model does not support it, mentioning another model that works for my use case, and pointing out that the model is incompatible with the current model. The ideal outcome from my perspective would be if the device model supported both use cases (both passing in transport as a parameter and relying on a local configuration file).

That being said, I think it does have a few advantages:
- one puppet certificate can be used to manage multiple devices (you don't have to deal with cert management for every device)
- transport can be serialized in through Puppet (no need for a separate process to manage how the device.conf is created)
- it allows for the creation of workflows that use multiple device endpoints
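A short sketch of the first and third advantages listed above; the switch_vlan type and the transport parameters are placeholders rather than an existing module. One agent node, holding a single certificate, declares transports for several devices, and the credentials arrive through the catalog rather than through a hand-managed device.conf:

node 'netops-proxy.example.com' {

  transport { 'sw01':
    username => 'admin',
    password => 'secret',
    server   => 'sw01.example.com',
  }

  transport { 'sw02':
    username => 'admin',
    password => 'secret',
    server   => 'sw02.example.com',
  }

  # hypothetical type: the same certname manages both endpoints, in order
  switch_vlan { 'sw01 vlan 42':
    transport => Transport['sw01'],
  }

  switch_vlan { 'sw02 vlan 42':
    transport => Transport['sw02'],
    require   => Switch_vlan['sw01 vlan 42'],
  }
}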
On Thu, Dec 12, 2013 at 7:21 AM, Dan Bode <bodepd@gmail.com> wrote:
> [...]
>
> I can't think of a case where I need facts from a device to make
> configuration decisions. Perhaps I'm just not far enough into it :)

In theory you should be able to detect the device version and use the appropriate provider. In practice, the one place I could use this functionality required different versions of a rubygem to even connect to the device.

> [...]
>
> Yep, facts would totally work, but AFAIK puppet resource does not invoke
> facter (are you saying that it should?).

No, this was abusing facter to pass in connection info (don't care about facts, and they won't work anyhow). So normally you can't run puppet resource with device type resources, and this allowed inspection of resources (see more issues below):

FACTER_connection=https://user:pass@url/ puppet resource network_device_type

> [...]
>
> My main concern here is with duplicated code. Currently, even if you did
> your above recommendations, you still wind up with connection information
> encoded in both the Puppet device as well as the transport. I would like
> to see a reasonable pattern for how they can share code.

Let me boil it down this way. If the module is designed correctly, transport is just retrieving the catalog connection info and initializing it the same way as the facter code above:

1. Transport searches the catalog for connectivity info.
2. It invokes Puppet::Util::NetworkDevice::F5::Device.new(catalog_credentials).

So we are just replacing device.conf with the info in the catalog and preserving everything else. It's not much overhead and there is no duplication in code. The problem I have with things like vShield is that I need a simultaneous connection to vCenter for the objects' moref ids, so I can't make it compatible with puppet device.

I could see a way to add transport functionality to resources implemented via puppet device. So if there's a puppet device resource that manages DNS, firewall rules, or load balancing, I could see some value in adding transport. The next question is how do you sync the source of truth in this case? A network device is often shared between many applications, and how would you get a complete picture when you start decentralizing control?

> I might just be struggling to understand exactly how this works.
> apply_to_device is required for using the credentials from device.conf,
> and means that the resources can't be used elsewhere?

There were supposed to be three types of resources: apply_to_device, apply_to_host(???), and apply_to_both(???). An apply_to_device resource only works with the puppet device command, apply_to_host only with puppet agent, and apply_to_both should work with both. I was hoping to toggle resources such as notify to apply_to_both, but the feature is broken. apply_to_device also prohibits device type resources from being used with puppet resource, so I actually kept two sets of resources so I can use them in puppet resource and puppet device.

HTH,

Nan
On Thu, Dec 12, 2013 at 12:08 AM, Markus Burger <markus.burger@uni-ak.ac.at> wrote:
> [...]
>
> Can you point me to a thread where this was discussed?
> I can only see an advantage of the proposed model for certain
> situations / device types, but not for the traditional use case.

There was some discussion related to this here:
https://groups.google.com/d/msg/puppet-users/cT-9wYquDV8/BkdhJsxqv8gJ

I think my viewpoint is: why not manage every API as a resource? What about databases? Dynamic DNS records? I think this opens some interesting options vs. a silo config restricted to some box.

If you want the puppet device style of isolated puppet run, you can achieve it with puppet agent/transport:

node 'network_device_name' {
  transport { 'f5':
    username => 'admin',
    password => 'pass',
  }

  F5_resource {
    transport => Transport['f5'],
  }
  ...
}

# This gets the catalog-specific resources for node 'network_device_name' { ... },
# which is really the same as puppet device with 'network_device_name' in device.conf:
puppet agent --certname <network_device_name>

Nan
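To round out the "manage every API as a resource" point, a last hedged sketch: the dns_record and pool_member types below are hypothetical placeholders, but they show how one ordinary catalog could order a DNS change ahead of a transport-backed load balancer change instead of keeping the device in its own silo:

node 'app-deployer.example.com' {

  transport { 'lb':
    username => 'admin',
    password => 'secret',
    server   => 'lb.example.com',
  }

  # hypothetical DNS API resource, applied from the local host
  dns_record { 'app.example.com':
    ensure => present,
    type   => 'A',
    value  => '192.0.2.10',
  }

  # hypothetical load balancer resource, applied over the transport and
  # ordered after the DNS record in the same catalog
  pool_member { 'app01:8080':
    ensure    => present,
    transport => Transport['lb'],
    require   => Dns_record['app.example.com'],
  }
}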