Sometimes linux-security gets a message stating that program xxx might be vulnerable to a yyy-attack (*). Sometimes people follow up stating that they couldn't find an exploit. If I ask the test-squad to test some known-to-work exploit, I get about a 50/50 response: half couldn't get the published exploit to work. I've learned to interpret "nope, it doesn't work on my system" as containing little or no information. It certainly should not be taken as a true statement.

Now, when a possible security problem is found in some program that is either accessible from outside or running with setuid permissions, for some people this is a VERY serious problem. Others only care about outside -> inside bugs. Still others are just interested in keeping informed, and won't even take action on a bug with a published exploit.

As a moderator for linux-security, I cannot take responsibility for your security policy. This means that I approve possible bugs even when there is no immediate threat.

Suppose I have a machine doing internet order processing. I have 100 clients that use that service, each with a few employees who have an account on the order-entry machine. If I were responsible for the security of this machine, I really wouldn't want to wait for some genius to find an exploit for the current "not exploitable" buffer-overrun problems. On the other hand, on a private machine that doesn't have any user accounts other than mine, I really wouldn't take the trouble to fix it.

However, I find it frustrating to have bugs found but not fixed. If in 5 years I find out that someone managed to exploit this buffer overrun on the then-current Red Hat 12.1 distribution, I'd be really pissed. We saw the possible bug, and should try to fix it somewhere in the "master source".

I suggest we try to tag bug reports between

    Everybody should evaluate the consequences.

and

    Only security-critical installations need
    to evaluate the consequences.
I'd recommend that "security critical" sites evaluate the recent buffer-overrun situations themselves (%). To the maintainers I plead: please fix every "possible" security hole, even when it isn't proven to be exploitable (+). For others, I personally think that fixing this is more trouble than it gains you.

Roger Wolff.

(*) Currently xxx == write or login, yyy == buffer-overrun.

(%) I -=am=- responsible for a security-critical machine. Problem evaluated. Action taken: none. I -=do=- have to take that decision myself.

(+) For example, when you find

    for (i = 1; i < argc - 1; i++);
        handle_file(argv[i]);

I really hope everybody maintaining a source would fix it, even though nobody has reported a bug about "more than one argument" not working. (Remove the "-1" and the ";" on the first line.)

P.S. I had a peek at the login buffer-overrun problem. Whether or not it is exploitable depends on the byte order of your machine and the implementation of your compiler. I think i386/gcc isn't vulnerable. I can't say a thing about other architectures or compilers (possibly even omitting -O2 could change things). Do -=not=- read this as "other architectures need to act on this problem". I don't think it is exploitable, but for security-critical applications, the person responsible should be the one taking the actual decision.
Benedikt Stockebrand
1997-Jan-23 15:15 UTC
Re: [linux-security] program xxx is not vulnerable.
R.E.Wolff@BitWizard.nl (Rogier Wolff) writes:

> As a moderator for linux-security, I cannot take responsibility for
> your security policy. This means that I approve possible bugs even
> when there is no immediate threat.

Aside from your assumed responsibility, I think we all should realize that security isn't just a matter of plugging the latest bug. With this attitude we'll move Linux into the same situation sendmail is already in --- "new" security holes found about monthly and no hope of a "secure" version in the foreseeable future. I don't blame Eric Allman for this --- he obviously started writing sendmail when no more than \epsilon people actually cared about network security and even fewer knew *how* to do it right --- but we ought to learn from this.

That's the point: if we stick to fixing only security holes that are known to be exploitable or are already exploited, there'll always be folks who get bitten before we move our butts. The sendmail experience (and the BSD lpr/lpd experience and... and... and...) should tell us to *design* everything to be safe. Considering the experiences with qmail, this may even work if we start from scratch, but it doesn't really seem feasible in many other cases. No matter what, we should at least *try* to do what's reasonable, i.e. fix known holes before the shit hits the fan. Thanks to its freely available sources and open developer community, Linux sure has all it takes to beat all commercial Un*ces at least in this respect.

I know only a little about the OpenBSD folks (yet...), but apparently they take a far more "pro-active" approach. It seems they've been systematically searching their source tree for the usual strcpy() calls causing buffer overruns and various other standard security-hole patterns.
Considering the various CERT advisories and other warnings on bugtraq and BoS, it seems they've found quite a few not-yet-exploited security holes that turned "real" only a couple of months later. Of course, unless they've already moved a great deal away from the NetBSD I know (1.1), they'll be dealing with a far smaller source tree than the average Linux distribution. We probably can't scan all the Red Hat or Debian source trees like this, but at least we should try both to keep the base system as secure as possible and to deal with potential problems as soon as they are found.

> I suggest we try to tag bug reports between
>
>     Everybody should evaluate the consequences.
>
> and
>
>     only security critical installations need
>     to evaluate the consequences.

I wouldn't take the responsibility for classifying found security holes in this fashion, because I consider it highly over-simplified. There are problems that only relate to:

 - sites connected to the Internet.
 - sites with potentially hostile users.
 - hosts providing a very special and uncommon service (e.g. AppleTalk over IP tunneling).
 - hosts with special hardware installed.
 - accounts with specific configurations.
 - specific distributions.
 - ...

We might try something similar to the "Impact" section in CERT Advisories though, stating the conditions under which an attack is possible. And if we dare, we might also add a "severity/urgency estimate". But this should be used mostly with linux-alert, not linux-security. After all, linux-security is for *discussing* security aspects of Linux, not for *announcing* known problems.

Ben

--
Ben(edikt)? Stockebrand     Runaway ping.de Admin---Never Ever Trust Old Friends
My name and email address are not to be added to any list used for advertising
purposes. Any sender of unsolicited advertisement e-mail to this address
implicitly agrees to pay a DM 500 fee to the recipient for proofreading services.