I build code using static linking for deployment across a set of machines. For me this has a lot of advantages - I know that the code will run, no matter what the state of the ports is on the machine, and if there is a need to upgrade a library then I do it once on the build machine, rebuild the executable, and rsync it out to the leaf nodes. Only one place to track security updates, only one place where I need to have all the ports the code depends on installed.

I recently wanted to use libdispatch, but I found that the port didn't install the static libraries. I filed a PR, and found out from the response that this was deliberate, and that a number of other ports were deliberately excluding static libraries too. Some good reasons were given, which I won't reproduce here, as you can read them at: http://www.freebsd.org/cgi/query-pr.cgi?pr=151306

Today I finally hit the problem where a critical library I am using has stopped working with static libraries (or so it appears at first glance). I was wondering what the general policy here was - should I just bite the bullet and go dynamic, and accept the maintenance headache that causes, or could we define something like 'WITH_STATIC_LIBRARIES' that could be set which would make ports install a set of static libraries (maybe into a separate /usr/local/lib/static?) so that the likes of me could continue to build static code? I'd very much like to be able to continue to ship single executables that "just run", but if there's some policy to only have dynamic libraries in ports going forward then fair enough...

-pete.
On Fri, Jan 14, 2011 at 02:07:37PM +0000, Pete French wrote:
> I build code using static linking for deployment across a set of
> machines. For me this has a lot of advantages - I know that the
> code will run, no matter what the state of the ports is on the
> machine, and if there is a need to upgrade a library then I do it
> once on the build machine, rebuild the executable, and rsync it out
> to the leaf nodes. Only one place to track security updates, only
> one place where I need to have all the ports the code depends on
> installed.
>
> I recently wanted to use libdispatch, but I found that the port
> didn't install the static libraries. I filed a PR, and found out
> from the response that this was deliberate, and that a number of
> other ports were deliberately excluding static libraries too. Some
> good reasons were given, which I won't reproduce here,
> as you can read them at: http://www.freebsd.org/cgi/query-pr.cgi?pr=151306
>
> Today I finally hit the problem where a critical library I am using
> has stopped working with static libraries (or so it appears at first glance).
> I was wondering what the general policy here was - should I just bite the
> bullet and go dynamic, and accept the maintenance headache that causes, or
> could we define something like 'WITH_STATIC_LIBRARIES' that could be set
> which would make ports install a set of static libraries (maybe into
> a separate /usr/local/lib/static?) so that the likes of me could
> continue to build static code? I'd very much like to be able to continue
> to ship single executables that "just run", but if there's some policy
> to only have dynamic libraries in ports going forward then fair enough...

Various features do not work with static linking because dlopen() does not work from static executables. Libraries that are also used by dlopen()ed modules should generally be linked dynamically, particularly if these libraries have global state. Things that use dlopen() include NSS (getpwnam() and the like), PAM and most "plugin" systems. If libc is statically linked, NSS falls back to a traditional mode that only supports the traditional sources (e.g. no LDAP user information); I think PAM and most plugin systems do not work at all.

For some system libraries, there can be kernel compatibility problems that prevent old libraries from working, even though an ABI-compatible shared library is available. This has happened with 6.x's libkse: binaries statically linked to it do not run on 8.x or newer, while libkse can be remapped to libthr for binaries dynamically linked to it. For these reasons, static linking to libc, libpthread and similar system libraries should be reserved for /rescue/rescue and similar programs, and not used in general.

Another feature only available with dynamic linking is hidden symbols that are visible only inside the shared object. Compiling a library that uses this feature as a static library will make the hidden symbols visible to the application and other libraries. This may cause name clashes that otherwise wouldn't have been a problem, or invite API abuse. Proper use of hidden symbols can also speed up linking and load times considerably, particularly if the code is written in C++.

Another issue is static linking's requirement to list all the libraries a library depends on, and in the correct order. With dynamic linking, listing the indirect dependencies is unnecessary and best avoided. This is generally not very hard to fix but still needs extra effort. (For example, pkg-config has Libs.private to help with it.)
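To make the dlopen() point concrete, here is a minimal sketch of the run-time module loading that NSS, PAM and other plugin systems rely on. The module path and entry-point name (plugin.so, plugin_init) are made up for illustration, but dlopen()/dlsym()/dlclose() are the standard <dlfcn.h> interface. A fully static executable has no run-time linker available, so this is exactly the step that stops working.

```c
#include <dlfcn.h>
#include <stdio.h>

int
main(void)
{
	/*
	 * Open a module at run time; this is what NSS/PAM-style plugin
	 * systems do internally.  "./plugin.so" and "plugin_init" are
	 * hypothetical names used only for illustration.
	 */
	void *handle = dlopen("./plugin.so", RTLD_NOW | RTLD_LOCAL);
	if (handle == NULL) {
		fprintf(stderr, "dlopen: %s\n", dlerror());
		return (1);
	}

	/* Look up the module's entry point and call it. */
	int (*init)(void) = (int (*)(void))dlsym(handle, "plugin_init");
	if (init == NULL) {
		fprintf(stderr, "dlsym: %s\n", dlerror());
		dlclose(handle);
		return (1);
	}
	int rv = init();

	dlclose(handle);
	return (rv);
}
```

The hidden-symbol issue can be sketched in a similar way. With the GCC/Clang visibility attribute (the library and function names below are hypothetical), internal_helper is not exported from libfoo.so, but if the same code is shipped as libfoo.a the symbol still takes part in the application's final link and can clash with another definition of the same name:

```c
/*
 * libfoo.c -- hypothetical library source, built either as
 *   cc -fPIC -shared -o libfoo.so libfoo.c        (shared)
 * or
 *   cc -c libfoo.c && ar rcs libfoo.a libfoo.o    (static)
 */

/*
 * Internal helper: hidden when linked into libfoo.so, but an ordinary
 * global symbol when libfoo.a is linked statically, where it can
 * collide with a same-named symbol in the application or another library.
 */
__attribute__((visibility("hidden")))
int
internal_helper(int x)
{
	return (x * 2);
}

/* Public API, exported in both cases. */
__attribute__((visibility("default")))
int
foo_compute(int x)
{
	return (internal_helper(x) + 1);
}
```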
If you want to link dynamically but avoid too much management overhead, consider using PC-BSD's PBI system, which allows you to ship all the necessary .so files (except system ones) with your application.

-- 
Jilles Tjoelker
On Fri, Jan 14, 2011 at 7:07 AM, Pete French <petefrench@ticketswitch.com> wrote:
> I build code using static linking for deployment across a set of
> machines. For me this has a lot of advantages - I know that the
> code will run, no matter what the state of the ports is on the
> machine, and if there is a need to upgrade a library then I do it
> once on the build machine, rebuild the executable, and rsync it out
> to the leaf nodes. Only one place to track security updates, only
> one place where I need to have all the ports the code depends on
> installed.

You still have to track security updates for the rest of your system - ssh, for example. Kernel bugs too (though admittedly those are much rarer).
Being an old curmudgeon, I have historically preferred static linking for some of the points raised (no external installation dependencies, I know the application is complete and will work, use of believed-good library versions, custom library patches are possible, etc).

The other response about needing to be explicit at link time about the libraries in use is something I like in general. A programmer should know all the dependencies of his application. I find it really disgusting when some little application uses a library with a huge dependency base, especially when the application doesn't use most of that base. Some applications pull needed library code into their own source tree and build their own versions. This can be useful in some cases; some of the bloatware applications do it to excess. If it is necessary, I don't know how anyone can make a proper and usable security declaration about an application which is dynamically linked against other libraries.

On the other hand, the state of shared libraries is probably much better than the last time I actually cared. People have learned about maintaining backward compatibility, API versioning, etc. If you make good decisions about the libraries you depend upon, life can be much better. Don't just select libraries that are the flavor of the day; instead use libraries with a good solid history and apparent ongoing support.

The other response indicates that even some of libc's core functionality doesn't work well with static linking. I'm disappointed about that state of things, but the reasons seem valid. The point about system version compatibility and providing shims for older ABIs is also interesting. Also, in this day of interpreted code (perl, etc) much of that code base is equivalent to dynamically linked. At $DAYJOB I've seen a few things with their own versions of certain common libraries included. I've been curious about the reasons but haven't had time to investigate.

The use of dlopen() also looks to make "plugins" much easier to do than with statically linked programs. Plugin equivalents in some of my earlier applications were much trickier to write since they also needed to be statically linked with a very reduced set of available support routines (basically none for the things I was doing at the time).

Today, I probably wouldn't fight using dynamic linking. I do wish things would continue to provide static libraries unless there are specific reasons static libraries won't work. I would like to see libc remain fully functional when statically linked, and I would like documentation about the functionality lost when statically linking with libc.

Mostly just some ramblings,
Stuart

-- 
I've never been lost; I was once bewildered for three days, but never lost!
                                                        -- Daniel Boone
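The plugin equivalent described above can be sketched as a compile-time registry: a table of handlers fixed at link time, so adding a "plugin" means recompiling and relinking the whole program rather than dropping in a module for dlopen(). The plugin names and handlers below are hypothetical, a minimal illustration rather than anyone's actual code.

```c
#include <stdio.h>
#include <string.h>

/*
 * A compile-time "plugin" registry, the kind of thing a statically
 * linked program uses instead of dlopen().  The table is fixed when
 * the binary is linked, so every handler must be compiled in.
 */

static int
handle_foo(const char *arg)
{
	printf("foo plugin handling \"%s\"\n", arg);
	return (0);
}

static int
handle_bar(const char *arg)
{
	printf("bar plugin handling \"%s\"\n", arg);
	return (0);
}

static const struct plugin {
	const char *name;
	int (*handler)(const char *);
} plugins[] = {
	{ "foo", handle_foo },
	{ "bar", handle_bar },
};

int
main(int argc, char **argv)
{
	if (argc < 3) {
		fprintf(stderr, "usage: %s plugin arg\n", argv[0]);
		return (1);
	}
	/* Select a "plugin" by name from the link-time table. */
	for (size_t i = 0; i < sizeof(plugins) / sizeof(plugins[0]); i++)
		if (strcmp(plugins[i].name, argv[1]) == 0)
			return (plugins[i].handler(argv[2]));
	fprintf(stderr, "unknown plugin: %s\n", argv[1]);
	return (1);
}
```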
On Fri 14 Jan 2011 at 06:07:37 PST Pete French wrote:
>
> I recently wanted to use libdispatch, but I found that the port
> didn't install the static libraries. I filed a PR, and found out
> from the response that this was deliberate, and that a number of
> other ports were deliberately excluding static libraries too. Some
> good reasons were given, which I won't reproduce here,
> as you can read them at: http://www.freebsd.org/cgi/query-pr.cgi?pr=151306
>

Interesting reading. One thing bothers me, however, about the reasons given against static linking. Surely, if a port statically links to a library, it calls out that library on a LIB_DEPENDS line and the dependency is reflected in the package database? So, if a security issue comes up with the library, it wouldn't be difficult to flag the dependent port as one that needs to be recompiled using the newly-patched library?

The user only gets the patches to the shared library after he reads and responds to the security notice, or when he's doing a normal update of his ports. Correct? Well then, what's different about the scenario when it's a static library? What am I missing here?
On Friday, 14 January 2011, Pete French <petefrench@ticketswitch.com> wrote:
> I build code using static linking for deployment across a set of
> machines. For me this has a lot of advantages - I know that the
> code will run, no matter what the state of the ports is on the
> machine, and if there is a need to upgrade a library then I do it
> once on the build machine, rebuild the executable, and rsync it out
> to the leaf nodes. Only one place to track security updates, only
> one place where I need to have all the ports the code depends on
> installed.

I actually tried to compile a port against another and have it link statically, but I couldn't find a way to do so without hacking the configure script. I was wondering if there was another (and easier) way to do so...

I use ldap for authentication purposes, along with pam_ldap and nss_ldap. If I compile openldap-client against openssl from ports, then it creates massive problems elsewhere. For example, the base ssh server will now crash due to using a different libcrypto. Compiling ports also becomes impossible, as bsd tar itself crashes (removing the ldap entry from nsswitch.conf is required to make things work again).

I was then advised in the FreeBSD forums to uninstall the openssl port, compile openldap against the base openssl, install it, then re-install the openssl port. (I have to use openssl from ports with apache/subversion to fix a bug with TLSv1 making svn commit crash under some circumstances.)

I dislike this method, because should openldap get upgraded again and be linked against the openssl port, I will lock myself out of the machine again due to sshd crashing. Just like what happened today :(

So how can I configure openldap-client to link against libssl and libcrypto statically?

Thanks