Reading about systemd, it seems it is not well liked and reminiscent of Microsoft's "put everything into the Windows Registry" (Win 95 onwards).

Is there a practical alternative to omnipresent, or invasive, systemd?

-- 
Regards, Paul.
England, EU. Centos, Exim, Apache, Libre Office.
Linux is the future. Micro$oft is the past.
On 07/07/2014 07:47 PM, Always Learning wrote:
> Reading about systemd, it seems it is not well liked and reminiscent of
> Microsoft's "put everything into the Windows Registry" (Win 95 onwards).
>
> Is there a practical alternative to omnipresent, or invasive, systemd ?

So you are following the thread on the Fedora list? I have been ignoring it. Best I can tell, the answer is: learn it and use it. And if you have any services, fix them so that they work with systemd. I work with one that does not, and it is very slow to complete its startup.
On Tue, Jul 08, 2014 at 12:47:38AM +0100, Always Learning wrote:
> Reading about systemd, it seems it is not well liked and reminiscent of
> Microsoft's "put everything into the Windows Registry" (Win 95 onwards).
>
> Is there a practical alternative to omnipresent, or invasive, systemd

To the tune of YMCA:

Young man, you don't like systemd
Oh young man, you get no sympathy
Young man, you will find that your luck
Is slowly running out with Linux

So young man, if you want to stick
To something that more resembles Unix
And young man, if you want to sing
Goodbye to Poettering, (bah bah bah bah)

FreeeeeeeeeBSD (yeah yeah yeah)
FreeeeeeeeeBSD

etc. I just made this up at work today, and that's as far as I got.

-- 
Scott Robbins
PGP keyID EB3467D6 ( 1B48 077D 66F6 9DB0 FDC2 A409 FA54 EB34 67D6 )
gpg --keyserver pgp.mit.edu --recv-keys EB3467D6
On 07/07/2014 06:47 PM, Always Learning wrote:
> Reading about systemd, it seems it is not well liked and reminiscent of
> Microsoft's "put everything into the Windows Registry" (Win 95 onwards).
>
> Is there a practical alternative to omnipresent, or invasive, systemd ?

The answer to this is no; replacing systemd with something else is just way too invasive.

Since new versions of CentOS, Ubuntu, Debian, RHEL, Fedora, OpenSUSE, Arch, Mageia and other Linux distros are all switching to systemd as the default, I would suggest that learning how to use it is going to be the way to go.

Of course, there are alternatives, including using CentOS-6 until 2020.
Johnny Hughes wrote:
> On 07/07/2014 06:47 PM, Always Learning wrote:
>> Reading about systemd, it seems it is not well liked and reminiscent of
>> Microsoft's "put everything into the Windows Registry" (Win 95 onwards).
>>
>> Is there a practical alternative to omnipresent, or invasive, systemd ?
>
> The answer to this is no; replacing systemd with something else is just
> way too invasive.
>
> Since new versions of CentOS, Ubuntu, Debian, RHEL, Fedora, OpenSUSE,
> Arch, Mageia and other Linux distros are all switching to systemd as the
> default, I would suggest that learning how to use it is going to be
> the way to go.

I just hope that the distributions implementing systemd are not as shortsighted (or rather, as forceful in pushing it) as Fedora/RHEL, and that they also offer other alternatives besides it. OpenRC is IMO a very good candidate, although I was also satisfied with sysvinit and upstart; both did their jobs _reliably_.

Franta Hanzlik
On 07/08/2014 11:37 AM, Always Learning wrote:
> Please see the link above. I used it to find the 'stateless' item, and
> after selecting it clicked on
>
> http://0pointer.de/blog/projects/stateless.html

There are many use cases involving servers where such a capability would be highly desirable. Most are cloud-oriented, where you want to spin up an instance rapidly (to deal with increased load, perhaps) and then spin it down; having dynamically loaded /etc and /var content allows this in a smooth manner.

Static servers have their uses, of course, but at least in my data center I find actual server load to be very dynamic while power load stays rather static; why *shouldn't* the power used be proportional to the work load? The real promise of 'cloud' technology, for us, is in-house servers that can spin up only when needed, saving power and cooling costs in the process.

Stateless is not the only way to go, of course, and nowhere in the blog post to which you link is 'never again honor anything in /etc and /var' to be found. Rather, much like /etc serves as a fall-back for many programs that look first in a dot-file in ~, the content in /usr serves as an OS-default fallback to the per-system (or per-instance) configuration and state in /etc and /var.

It is a different way of looking at things, for sure, but I can definitely see a server use case for this sort of thing, especially since there is significant budget pressure to reduce power costs. And dynamic spinup of servers to handle increased load is a use case for systemd's rapid bootup. They go hand in hand.

The Unix philosophy unfortunately sometimes misses the forest for all of the trees. Sometimes tools need to actually be designed to work together, and sometimes a Swiss Army knife is the right thing to have. (And I'm an old Unix hack, too, having used Unix of several flavors since before Linux was even a gleam in Linus' eye.)
On 07/08/2014 12:06 PM, Les Mikesell wrote:
> Don't know about your servers, but ours take much, much longer for
> their boot-time memory and hardware tests and initialization than
> anything the old style sysvinit scripts do.

Physical servers can be told to skip certain parts of their POST, especially the memory test. Memory tests are redundant with ECC. (I know; I have an older SuperMicro server here that passes memory testing in POST but throws nearly continuous ECC errors in operation; it does operate, though.) If a server fails during spinup, flag the failure while spinning up another one.

Virtual servers have no need of POST (they also don't save as much power, although dynamic load balancing can do some predictive heuristics, spin up host hypervisors as needed, and do live migration of server processes dynamically).

To detect failures early, spin up every server in a rotating sequence with a testing instance, and skip POST entirely. If you have to, spin up the server in a stateless mode and put it to sleep, then wake it up with dynamic state. There are a lot of possibilities here, if you're willing to think outside the 1970s timesharing-minicomputer box that gave rise to the historical Unix philosophy. And this has nothing to do with Windows; I have been a primarily-Linux user since 1997.

Long POSTs need to go away, with better fault tolerance after spinup being far more desirable, much like the promise of the old-as-dirt Tandem NonStop system. (I say the 'promise' rather than the 'implementation' for a reason.)
On 07/07/2014 06:47 PM, Always Learning wrote:
> Reading about systemd, it seems it is not well liked and reminiscent of
> Microsoft's "put everything into the Windows Registry" (Win 95 onwards).
>
> Is there a practical alternative to omnipresent, or invasive, systemd ?

I hate to say it, but all the bloviating we might want to do in support of or in opposition to systemd does not matter with respect to CentOS 7. RHEL 7 has it, so CentOS 7 has it. Use CentOS 7 or don't; your choice.

If you want to replace systemd and you can figure out how, do it. If it works and you want to get into a SIG, great, then start one. If you want to discuss the mechanisms for removing systemd and collaborate on doing it via patches and changes to some package(s) in CentOS 7, great. Fork the packages from git.centos.org and go to GitHub, start coding with your friends; you can use the centos-devel mailing list to discuss the changes.

But failing that, let's try to close down the thread a bit unless there is really something constructive to add.
On 07/08/2014 01:22 PM, John R Pierce wrote:
> On 7/8/2014 9:25 AM, Lamar Owen wrote:
>> Physical servers can be told to skip certain parts of their POST,
>> especially the memory test. Memory tests are redundant with ECC.
> but, you HAVE to zero ALL of memory with ECC to initialize it.

True enough, but this shouldn't take five minutes on a server with multiple GB/s of memory bandwidth. My Dell 6950s take a full five minutes to POST, and that's ridiculous. There are eight cores; each core has enough bandwidth to its local RAM (NUMA, of course) that it should be able to sustain 2 GB/s zeroing without a lot of trouble. That's a rate of 16 GB/s aggregate, and my 32 GB of RAM should be zeroed in 2 seconds or so, not five minutes.

It's still not as bad as our Sun Enterprise 6500 with 18 GB, though, which takes about a minute per GB; that is also ridiculous (it's also NUMA, and the Sun firmware does start up each CPU to test its own local RAM blocks).
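The arithmetic above can be sanity-checked with a quick back-of-envelope calculation. (This is a sketch; the 2 GB/s per-core zeroing rate is the poster's estimate, not a measured figure.)

```python
# Back-of-envelope check of the memory-zeroing claim above.
# Assumptions (from the post, not measured): 8 NUMA cores, each
# sustaining 2 GB/s zeroing its local RAM, 32 GB total RAM.
cores = 8
per_core_gb_per_s = 2
total_ram_gb = 32

aggregate_gb_per_s = cores * per_core_gb_per_s   # 16 GB/s aggregate
zero_time_s = total_ram_gb / aggregate_gb_per_s  # ~2 seconds, not 5 minutes

print(f"{aggregate_gb_per_s} GB/s aggregate; zeroed in {zero_time_s:.0f} s")
```

Even if the per-core rate were off by an order of magnitude, zeroing would take well under a minute, which supports the point that the five-minute POST is dominated by something other than raw bandwidth.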
On 07/08/2014 01:27 PM, Les Mikesell wrote:
> On Tue, Jul 8, 2014 at 11:25 AM, Lamar Owen <lowen at pari.edu> wrote:
>> Memory tests are redundant with ECC. (I
>> know; I have an older SuperMicro server here that passes memory testing
>> in POST but throws nearly continuous ECC errors in operation; it does
>> operate, though). If it fails during spinup, flag the failure while
>> spinning up another server.
> I don't think that is generally true. I've seen several IBM systems
> disable memory during POST and come up running with a smaller amount.

Yes, and I have a few Dells that do that as well. Unfortunately most OSes aren't 'hotplug/unplug' for RAM, which would alleviate the need to tag it out during POST. But perhaps some of today's and yesterday's hardware just isn't up to the task of reliable rapid power-on. So perhaps I should have written 'Memory tests should be redundant with ECC.'

> Our servers tend to just run till they die. If we didn't need them we
> wouldn't have bought them in the first place. I suppose there are
> businesses with different processes that come and go, but I'm not sure
> that is desirable.

Our load graphs here are very spurty, with the spurts going very high during certain image reduction processes. It is to the point where I could probably save money by putting a few of the more power-hungry systems that have spurty loads on a timed sleep basis, with WoL bringing them back up prior to the next day's batch. But that's an ad hoc solution, and I really don't like ad hoc solutions when infrastructure ones are available and better tested.

> If you need load balancing anyway you just run enough spares to cover
> the failures.

And pay the power bill for them.
On 07/09/2014 01:00 PM, John R Pierce wrote:
> i find the biggest part of server POST is all the storage and network
> adapter bios's need to get in there, scan the storage busses,
> enumerate raids, initialize intel boot agents, and so forth.

I've found that disabling all but the boot device's BIOS works wonders and makes installs far happier, with the exception of real hardware RAID cards. The Linux kernel is quite happy doing any and all fibre-channel enumeration with the HBA's BIOS turned off (all my large storage is FC and iSCSI SAN), and the 'Intel boot agent' only lives long enough to PXE boot if I need that. The 3Ware 9500s I have typically take a bit longer and require the BIOS, though; but with a small array that's a few tens of seconds, a minute tops. That's one advantage of Linux mdraid.

But our 6950s spend five minutes on the memory test alone; that's not counting the Dell PERC boot device enumeration and drive spinup.

The fastest-booting servers I have are our two pfSense firewalls; I've trimmed the BIOS setup to the bone, and those boxes reboot in a few tens of seconds. (Yes, I count a firewall as a server, since it runs on server hardware, Intel 5000X-based dual quad-core Xeons with 4 GB of RAM each, does wire speed with more than one million pf table entries on a 1 Gb/s WAN link, and provides an essential network service to the rest of the hosts on the network.)

But, point taken: there's more to a POST than just the memory test.
On 2014-07-07, Always Learning <centos at u62.u22.net> wrote:
> Reading about systemd, it seems it is not well liked and reminiscent of
> Microsoft's "put everything into the Windows Registry" (Win 95 onwards).

Has anyone here actually interacted with systemd, and if so could you perhaps provide a writeup of your experiences? I feel like I haven't seen any practical information on systemd in this thread, and I'd like to have that before forming an initial opinion (at which point I'd attempt to interact with it myself in order to form a better-informed opinion).

--keith

-- 
kkeller at wombat.san-francisco.ca.us
On 07/15/2014 09:38 AM, Jonathan Billings wrote:
> On Mon, Jul 14, 2014 at 09:50:18PM -0700, Keith Keller wrote:
>> I think this could be very useful, especially coming from someone who
>> was initially reluctant (as I and clearly others are).
> Ok, I'll give some examples of my experiences. Warning: long post.
>
> ...
> When I started using and writing my own systemd units, I found them
> quite simple, ...
>
> So, the things that have bothered me so far:
> ...

Jonathan, thanks for the balanced treatment and for posting actual experience and not just regurgitating tired tropes.
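For readers who haven't written one, a minimal service unit of the kind Jonathan describes is just a short INI-style text file. (This is an illustrative sketch; the `mydaemon` name and path are hypothetical, not taken from his post.)

```ini
# /etc/systemd/system/mydaemon.service -- hypothetical example unit
[Unit]
Description=Example daemon (illustrative only)
After=network.target

[Service]
# Point this at your actual daemon binary; --foreground is a common
# convention for daemons that should not fork under systemd.
ExecStart=/usr/sbin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After dropping the file in place, `systemctl daemon-reload` followed by `systemctl enable --now mydaemon` would register and start it; compare that with the length of a typical sysvinit script doing the same job.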
On 07/15/2014 11:33 AM, m.roth at 5-cent.us wrote:
> This one does bother me. I may not want to restart a production
> instance of apache, when all I want it to do is reload the
> configuration files, so that one site changes while the others are all
> running happily as clams.

systemctl reload $unit

Documented in the systemctl(1) man page. If the unit(s) you want to reload don't support that, or you want to reload more than one unit's configuration in one command, you use:

systemctl reload-or-restart $unit

I've wanted that one for a while, and 'service' doesn't do it. It also supports globbing of the name: 'systemctl reload-or-restart httpd*' (with proper quoting) will restart or reload all running units that match the glob. So on my load-balanced multiple-frontend Plone installation I could run 'systemctl reload-or-restart plone-*' and it will do the right thing, no matter how many frontend instances I have selected for running. That's actually pretty cool. There are quite a few commands that systemctl supports that I have wanted for 'service' for a long time.
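Whether 'systemctl reload' works for a given unit depends on that unit declaring an ExecReload= line. A sketch of what that looks like (the unit name and the SIGHUP convention here are illustrative assumptions, not taken from the thread; many daemons re-read configuration on SIGHUP, but check your daemon's documentation before relying on it):

```ini
# exampled.service fragment -- hypothetical
[Service]
ExecStart=/usr/sbin/exampled
# Tells systemd how to reload this service's configuration without
# restarting the process; $MAINPID is expanded by systemd to the
# main process ID of the service.
ExecReload=/bin/kill -HUP $MAINPID
```

With that line present, 'systemctl reload exampled' re-reads the configuration in place; without it, 'systemctl reload-or-restart' falls back to a full restart.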