banjer
2012-Aug-09 20:32 UTC
[Puppet Users] Error 400 on Server: Another local or imported resource exists with the type and title Sshkey
I am attempting to remove an old ssh host key from /etc/ssh/ssh_known_hosts. In my manifest, I have the following:

    # add keys
    @@sshkey { $hostname:
      ensure => present,
      type   => "rsa",
      key    => $sshrsakey,
    }

    # remove key
    @@sshkey { "foohost":
      ensure => absent,
      type   => "rsa",
    }

    Sshkey <<| |>>

But I get this error on puppet agents:

    root@harper~> puppet agent -t
    info: Retrieving plugin
    info: Loading facts in datacenter
    info: Loading facts in datacenter
    err: Could not retrieve catalog from remote server: Error 400 on SERVER: Another local or imported resource exists with the type and title Sshkey[foohost] on node harper
    warning: Not using cache on failed catalog
    err: Could not retrieve catalog; skipping run

The "add keys" piece above has always worked great for dynamically adding to and managing the ssh_known_hosts file, but this is the first time I've tried 'ensure => absent' for a specific host's old key. I should note that the old host "foohost" had its OS rebuilt (was SLES, now CentOS) and I reused the old IP on the new host. Not sure if that would affect it.

The best I could find via Google was http://projects.puppetlabs.com/issues/11629, but it doesn't provide any clues as to what needs to be cleaned out or whether my manifest syntax is off. I also tried adding "Sshkey <<| |>>" after "add keys" AND after "remove key".

I think I need to clean out stale something-or-other for foohost on all my nodes. Any ideas? Thank you thank you.
kp-v
2012-Aug-09 20:57 UTC
[Puppet Users] Re: Error 400 on Server: Another local or imported resource exists with the type and title Sshkey
Does $hostname ever get set to "foohost" in the add key section? Also, can you show the results of:

    puppet resource sshkey foohost

On Thursday, August 9, 2012 1:32:40 PM UTC-7, banjer wrote:
> I am attempting to remove an old ssh host key from /etc/ssh/ssh_known_hosts. [...]
> I think I need to clean out stale something-or-other for foohost on all my nodes. Any ideas?
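For reference, `puppet resource sshkey foohost` reads the local ssh_known_hosts file through the sshkey type's provider (it does not query storeconfigs) and prints whatever it finds there as a resource declaration. On a node that still carries the stale entry, the output would look roughly like the sketch below; the key value and key type shown are illustrative, not taken from the original post:

    sshkey { 'foohost':
      ensure => 'present',
      type   => 'ssh-rsa',
      key    => 'AAAAB3NzaC1yc2EAAAABIwAA...',
    }

If the entry is already gone from the file, the same command reports ensure => 'absent', which helps separate "stale file contents" from "stale storeconfigs records".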
jcbollinger
2012-Aug-09 21:31 UTC
[Puppet Users] Re: Error 400 on Server: Another local or imported resource exists with the type and title Sshkey
On Thursday, August 9, 2012 3:32:40 PM UTC-5, banjer wrote:
> I am attempting to remove an old ssh host key from /etc/ssh/ssh_known_hosts. [...]
> err: Could not retrieve catalog from remote server: Error 400 on SERVER: Another local or imported resource exists with the type and title Sshkey[foohost] on node harper

Yes, exported resources need to be unique across the site, where "unique" is determined by the combination of type and title. Also, you cannot collect an exported resource whose type and title are the same as those of a local resource.

If the node that exported Sshkey[foohost] has been decommissioned, then cleaning its configuration out of your storeconfigs DB should get you 90% of the way to where you want to be. For the other 10%, change the explicit Sshkey[foohost] declaration from an exported resource to a local one; otherwise every node will try to export it. That might not be a deal killer, now that I think about it, but it's certainly ugly.

> The "add keys" piece above has always worked great for dynamically adding to/managing the ssh_known_hosts file, but this is the first time I've tried 'ensure => absent' for a specific host's old key. I should note that the old host "foohost" had its OS rebuilt (was SLES, now CentOS) and I used the old IP on the new host.

If there is a new foohost that is exporting a new key, then none of this should be necessary. Puppet ought to replace the old ssh_known_hosts entry with the new one.

> The best I could find via Google was http://projects.puppetlabs.com/issues/11629 [...] I also tried adding "Sshkey <<| |>>" after "add keys" AND after "remove key".

Collecting the same resources twice, or collecting them at a different place in your manifest, still leaves you with duplicate resource declarations.

> I think I need to clean out stale something-or-other for foohost on all my nodes. Any ideas?

If there is a new "foohost" client then you may not need to do anything. If not, then yes, you should clear its configuration out of your storeconfigs DB.

John
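In manifest terms, the suggestion amounts to something like the sketch below, assuming the class is applied to every node in the group; the parameter values are copied from the original post:

    # exported: each node publishes its own key for all nodes to collect
    @@sshkey { $hostname:
      ensure => present,
      type   => "rsa",
      key    => $sshrsakey,
    }

    # local, NOT exported: each node already knows this key should be gone,
    # so there is no need to publish the declaration site-wide
    sshkey { "foohost":
      ensure => absent,
      type   => "rsa",
    }

    Sshkey <<| |>>

Once the old foohost's exported copy has been cleaned out of storeconfigs, the collector still pulls in every other node's exported key, and the local "absent" declaration simply removes the stale entry on each node that applies the class.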
banjer
2012-Aug-10 14:50 UTC
[Puppet Users] Re: Error 400 on Server: Another local or imported resource exists with the type and title Sshkey
> If there is a new "foohost" client then you may not need to do anything.
> If not, then yes, you should clear its configuration out of your
> storeconfigs DB.

It's a new hostname as well as a new key; I wasn't clear on that earlier. Also, fyi, I had already run `puppet node clean foohost`. Let's call the old host "foohost" and the new one "newhost".

My goal is to have 50 hosts with the same ssh_known_hosts file, which will contain the keys for the 50 hosts, so from what I understand I need to use sshkey as an "exported" resource. Perhaps I'm not understanding local vs. exported resources, though. It seems to me that if the hostnames are different, then there shouldn't be a problem with the two resource declarations coexisting in my manifest, as the type-title combo should be unique, right?

A solution I've come up with is to have ONLY this declared:

    # remove key
    @@sshkey { "foohost":
      ensure => absent,
      type   => "rsa",
    }

    Sshkey <<| |>>

and then let my puppet agents pull down their configs and thus handle the removal of foohost from ssh_known_hosts. Later today, I'll remove this declaration and put back in:

    # add keys
    @@sshkey { $hostname:
      ensure => present,
      type   => "rsa",
      key    => $sshrsakey,
    }

    Sshkey <<| |>>

Not the prettiest solution, but this situation where we rebuild a host with a new hostname isn't that common.

Now, with all that said, I can see in my storeconfigs DB, which is also shared by Foreman, that there are still some records for sshkey and foohost. I'm not sure how to clean these out (is `puppet node clean foohost` the correct way?), other than with a postgres query.
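For anyone who does want to look at what is lingering before deciding how to clean it, a query along the lines of the sketch below lists every stored Sshkey[foohost] record and which node exported it. The table and column names assume Puppet's stock ActiveRecord storeconfigs schema (hosts and resources tables); they may differ in a Foreman-shared database, so treat this as illustrative only:

    -- table/column names assume the stock ActiveRecord storeconfigs schema
    SELECT h.name AS exporting_node, r.restype, r.title, r.exported
    FROM resources r
    JOIN hosts h ON h.id = r.host_id
    WHERE r.restype = 'Sshkey' AND r.title = 'foohost';

Note that `puppet node clean foohost` only removes rows owned by the old foohost host record; it cannot touch Sshkey[foohost] rows that other nodes have since exported themselves, which is the situation described in the next reply.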
jcbollinger
2012-Aug-10 21:29 UTC
[Puppet Users] Re: Error 400 on Server: Another local or imported resource exists with the type and title Sshkey
On Friday, August 10, 2012 9:50:00 AM UTC-5, banjer wrote:
> It's a new hostname as well as a new key; I wasn't clear on that earlier. Also, fyi, I had already run `puppet node clean foohost`. Let's call the old host "foohost" and the new one "newhost".
>
> My goal is to have 50 hosts with the same ssh_known_hosts file, which will contain the keys for the 50 hosts, so from what I understand I need to use sshkey as an "exported" resource. Perhaps I'm not understanding local vs. exported resources, though.

Exported resources are a good choice for this purpose. They allow each node to declare its key on behalf of all the others (and itself), which can be darn convenient. This is exactly the sort of thing they are designed for.

The characteristics distinguishing exported resources from ordinary resources are:

1. they are accessible to all nodes, not just the one that declares them,
2. they are added to the catalogs of only those nodes that collect them (which do not have to include the nodes that declare them), and
3. there is no 3.

It is because of (1) that exported resources' (type, title) combinations should be unique across the site. It is because there is no 3 (that is, because in every other respect they are ordinary resources) that exported resources' (type, title) cannot duplicate those of resources declared locally on the nodes that collect them. Ultimately, both follow from what I suspect is the key point you're missing: exported resources are no different from any others once they are collected.

> It seems to me that if the hostnames are different, then there shouldn't be a problem with the two resource declarations coexisting in my manifest, as the type-title combo should be unique, right?

You effectively extend the contents of your manifest to include the declarations of all the exported resources you collect. So it *is* a problem if your manifest declares a resource (whether plain, virtual, or exported) that matches one it collects elsewhere.

> A solution I've come up with is to have ONLY this declared:
>
>     # remove key
>     @@sshkey { "foohost":
>       ensure => absent,
>       type   => "rsa",
>     }
>
>     Sshkey <<| |>>

I'm supposing that the class containing that declaration is assigned to every node, or at least to every node in the group that shares keys. So every node is going to export that Sshkey and collect it (or some other node's copy of it). Why? Every node already knows the key is supposed to be absent, so it doesn't need any of the others to tell it that. It would be better, therefore, to make the resource an ordinary one. Generally speaking, exported resources should always be specific to the node exporting them.

At this point you may be stuck, however. Making the resource local is a problem if nodes are going to collect another copy of the same resource. Ordinarily you would expect cleaning foohost's config from the DB to resolve that (thus you would do so after decommissioning foohost but before declaring its key absent on your other nodes), but now that you have all your other nodes exporting Sshkey['foohost'] you have no easy way to clear out all those exported records.

> Now, with all that said, I can see in my storeconfigs DB, which is also shared by Foreman, that there are still some records for sshkey and foohost. I'm not sure how to clean these out (is `puppet node clean foohost` the correct way?), other than with a postgres query.

Since you ran puppet node clean (after foohost was decommissioned, I presume), I would think that the records you are now seeing for Sshkey[foohost] are the ones being exported by the other nodes. You are begging for trouble (and indeed have found some) when you export resources that are not specific to the nodes for which they are declared.

This is the procedure I would recommend in the future:

1. Decommission a node, "foohost" for example.
2. Once you are confident that foohost will never again contact the puppetmaster, clean its configuration out of your storeconfigs DB by running "puppet node clean foohost" on the master.
3. Declare *local* resources on all your nodes to clean out any of foohost's exported resources that were previously collected and applied (sketched below). That would be very much like what you actually did, but as local resources instead of exported ones.

The local declarations added to clean out resources previously collected from the late foohost can stay around as long as you wish. That can be convenient if you have to accommodate the possibility of some nodes not checking in within a narrow window (e.g. laptops or machines down for maintenance, or if you use an extended sync interval).

Alternatively, before you decommission a node, you could make it export "ensure => absent" versions of its exported resources. In that case you would want to *avoid* cleaning those out of the database, at least until you're confident that all the other nodes have collected and applied them. It wouldn't be too bad to leave them indefinitely, especially if you're using thin storeconfigs.

With either of those you would not have to switch from ordinary operations to cleanup mode and back; instead the cleanup would be a natural part of your ordinary operations.

John
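A minimal sketch of step 3 and of the "export absent before decommissioning" alternative, reusing the resource names from this thread (illustrative, not taken verbatim from the original posts):

    # Step 3: a local (non-exported) resource applied to all surviving nodes
    sshkey { "foohost":
      ensure => absent,
      type   => "rsa",
    }

    # Alternative: on a node that is about to be decommissioned, flip its own
    # exported key to absent so every collecting node picks up the removal
    @@sshkey { $hostname:
      ensure => absent,
      type   => "rsa",
    }

The step 3 resource stays out of storeconfigs entirely, which is what avoids the "Another local or imported resource exists" collision once the old exported record has been cleaned from the database.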