Bennett Haselton
2011-Dec-28 03:13 UTC
[CentOS] what percent of time are there unpatched exploits against default config?
Suppose I have a CentOS 5.7 machine running the default Apache with no extra modules enabled, and with the "yum-updatesd" service running to pull down and install updates as soon as they become available from the repository. (Assume further that the password is strong, etc.) On the other hand, suppose that as the admin I'm not subscribed to any security-alert mailing lists which send out announcements like "Please disable this feature as a workaround until this hole is plugged," so the machine just hums along with all of its default settings.

So the machine can still be broken into if there is an unpatched exploit released in the wild, during the window of time before a patch is released for that hole. On the other hand, at any point in time when there are no unpatched exploits in the wild, the machine should be much harder to break into.

Roughly what percent of the time is there such an unpatched exploit in the wild, so that the machine can be hacked by someone keeping up with the exploits? 5%? 50%? 95%? Hopefully this is specific enough that the answer is not "it depends" :) -- an actual numeric answer should exist, although I don't know whether anyone has ever tried to work it out. But if not, then what's a good guess, based on observing how frequently root exploits are released in the wild and how long the patches usually take?

Bennett
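The percentage being asked for is just exposed days divided by total days. As a toy illustration of the arithmetic (every date and patch delay below is invented for the example; this is not real CVE data), it could be computed like this:

```python
from datetime import date, timedelta

# Hypothetical exploit windows: (exploit public, patch available).
# These dates are made up for illustration -- NOT real CVE history.
windows = [
    (date(2011, 2, 3), date(2011, 2, 10)),    # 7 days exposed
    (date(2011, 6, 20), date(2011, 6, 22)),   # 2 days exposed
    (date(2011, 11, 1), date(2011, 11, 15)),  # 14 days exposed
]

def exposed_fraction(windows, start, end):
    """Fraction of days in [start, end) covered by at least one open window."""
    total_days = (end - start).days
    exposed = set()
    for opened, patched in windows:
        d = opened
        while d < patched:
            exposed.add(d)   # a set, so overlapping windows aren't double-counted
            d += timedelta(days=1)
    return len(exposed) / total_days

frac = exposed_fraction(windows, date(2011, 1, 1), date(2012, 1, 1))
print(f"{frac:.1%}")  # 23 exposed days out of 365 -> 6.3%
```

The hard part, as the replies below point out, is that nobody has trustworthy values to put in the `windows` list, since many exploits are used before they are ever disclosed.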
Karanbir Singh
2011-Dec-28 03:29 UTC
[CentOS] what percent of time are there unpatched exploits against default config?
On 12/28/2011 03:13 AM, Bennett Haselton wrote:
> Roughly what percent of the time is there such an unpatched exploit in the
> wild, so that the machine can be hacked by someone keeping up with the
> exploits? 5%? 50%? 95%?

There is no way to tell, and there is no metric to work against, unless there is some source that can identify exactly when and how a specific exploit was discovered (but then again, many exploits are not reported by the people who find them; they just abuse those exploits for as long as they can).

- KB
Gilbert Sebenste
2011-Dec-28 03:33 UTC
[CentOS] what percent of time are there unpatched exploits against default config?
On Tue, 27 Dec 2011, Bennett Haselton wrote:
> Suppose I have a CentOS 5.7 machine running the default Apache with no
> extra modules enabled, and with the "yum-updatesd" service running to pull
> down and install updates as soon as they become available from the
> repository.
>
> So the machine can still be broken into, if there is an unpatched exploit
> released in the wild, in the window of time before a patch is released for
> that update.
>
> Roughly what percent of the time is there such an unpatched exploit in the
> wild, so that the machine can be hacked by someone keeping up with the
> exploits? 5%? 50%? 95%?

There's no way to give you an exact number, but let me put it this way: if you've disabled as much as you can (and by default most stuff is disabled, so that's good), and you restart Apache after each update, your chances of being broken into are greater from things like SSH brute-force attacks than from an Apache hole. There's always a chance someone will get in, but when you look at the security-hole history of Apache, particularly over the past few years, there have been numerous CVEs, but workarounds usually exist and very few of the holes have been earth-shattering. The version that ships with 5.7 is as secure as they come. If it weren't, most web sites on the Internet would have been hacked by now, as most run Apache.

Gilbert Sebenste (My opinions only!)
Eero Volotinen
2011-Dec-28 16:03 UTC
[CentOS] what percent of time are there unpatched exploits against default config?
http://www.awe.com/mark/blog/20110727.html

-- Eero
Lamar Owen
2011-Dec-30 15:15 UTC
[CentOS] what percent of time are there unpatched exploits against default config?
On Wednesday, December 28, 2011 10:38:30 PM Craig White wrote:
> the top priority was to get the machine back online?
>
> Seems to me that you threw away the only opportunity to find out what
> you did wrong and to correct that so it doesn't happen again. You are
> left to endlessly suffer the endless possibilities and the extreme
> likelihood that it will happen again.

Agreed 100%. There is an old saying that applies here: "penny wise but pound foolish." While getting back up quickly is a definite goal, fixing the underlying issue (which can only be done once the underlying issue is known!) is far more important. If downtime cannot be tolerated long enough for a thorough investigation, then the high-availability plan needs to be adjusted to provide failover to another box/VM while the compromised box/VM is investigated.

As to the OP's original question about statistics: it seems to me that such statistics are useless for predictive analysis, and no matter how much history you have of past exploit timing, it cannot accurately predict the next exploit's timing or the exploitability of the next issue. For risk assessment it might be useful to have some sort of metric, with the understanding that no risk assessment is an accurate predictor of future exploitability.
Lamar Owen
2011-Dec-30 15:24 UTC
[CentOS] what percent of time are there unpatched exploits against default config?
On Tuesday, December 27, 2011 10:13:12 PM Bennett Haselton wrote:
> Roughly what percent of the time is there such an unpatched exploit in the
> wild, so that the machine can be hacked by someone keeping up with the
> exploits?

While I did reply elsewhere in the thread, I want to address this specifically. I can give you a percentage number very easily: the answer is 100%. There is always an unpatched exploit in the wild; just because it hasn't been found by the upstream vendor (and by extension the CentOS project) doesn't mean it's not being used in the wild. I would hazard to say the risk from an unknown, but used, exploit is far greater than that from the 'window of opportunity' exploits you seem to be targeting. I would also hazard that the risk is similar to 'window of opportunity' exploit timing in the Windows world; not because the OSes are similar in terms of security, but because 'window of opportunity' exploit timing is the same regardless of the general security of the OS. And I think studies of 'window of opportunity' exploits have been done and are publicly available.

I say this after having performed a risk assessment of our own infrastructure, incidentally. It's not a matter of 'if' you will be hacked, but 'when,' and this is being acknowledged in high-level security circles. So you plan your high-availability solution accordingly, and plan for outages due to security issues just as you'd plan for network or power outages. This is becoming standard operating procedure in many places.
Lamar Owen
2011-Dec-30 15:35 UTC
[CentOS] what percent of time are there unpatched exploits against default config?
On Thursday, December 29, 2011 12:33:41 PM Ljubomir Ljubojevic wrote:
> If you use denyhosts or fail2ban, attacker needs 10,000 attack PC's that
> never attacked any denyhosts or fail2ban server in recent time.

That would be a very small botnet. And with gamers out there with CUDA-capable GPUs getting botted, the scale of the botnets doing brute-forcing (among other nefariousness) should never be underestimated. In addition to fail2ban, simple user-based login timeouts and lockouts can be used that survive botnet brute-forcing, but they become sitting ducks for denial of service because of it. Security is a hard problem. There is no magic bullet; recent news should show that.
Lamar Owen
2011-Dec-30 15:58 UTC
[CentOS] what percent of time are there unpatched exploits against default config?
On Friday, December 30, 2011 10:24:15 AM Johnny Hughes wrote:
> Agree with this. At the very least, some kind of image (dd) of the
> original disk for further study even if you have to get the machine back
> on line and you don't have a failover machine.

Speaking of dd, ddrescue in my experience is faster, but even then you will have downtime during the imaging, and for a large drive this can easily take hours. I imaged a 500GB drive yesterday on my laptop using CentOS 6.2 plus the EPEL ddrescue package (LiveCD on a USB stick, by the way, with a 1GB overlay) and a USB 3.0 Western Digital 2.5-inch external drive; it took about 4.5 hours, roughly the same as eSATA, even over USB 3.0 (the CentOS 6.2 Live media's kernel fully supports my USB 3.0 ExpressCard controller, by the way). I did this because the laptop's internal hard drive is doing sector reallocations; there are 97 reallocated sectors at this point, which can be a predictor of drive failure, so it was time to image it.

(Les is likely to mention Clonezilla at this point, but my partitioning and use of unallocated space for things precludes Clonezilla; I tried it, and it didn't work.) CAINE or a similar tool would work just as well, but I'd rather set up a CentOS USB stick to do it than use yet another distribution, even though CAINE has uses beyond just imaging and should be seriously considered for forensics even by die-hard CentOS users; the Fedora-based NST is a close second in my book. Something like CAINE or NST based on CentOS would be fun. :-)

Now, SAN snapshotting or VM snapshotting can help reduce downtime: take the snap, start imaging the snap, re-image/re-install to the underlying LUN/volume while the snap is imaging, then blow away the snap once it's imaged. This requires lots of space for the snap delta files, but the downtime is only the time required to take the snap (extremely quick on a SAN, slightly less quick on something like vSphere) plus the time to re-image/re-install the underlying LUN/volume. Even here a large VM can take hours to re-image/re-install. Better to plan ahead for failover while forensic imaging takes place.
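One step the imaging discussion leaves implicit is integrity verification: hash the source before imaging, hash the image afterward, and compare, so the copy can later be shown to be bit-for-bit faithful. On real hardware the copy itself would be done with dd or ddrescue against something like /dev/sda; the sketch below just demonstrates the verification step against an ordinary file standing in for the device:

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 so a huge image never has to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Stand-in "device": a small file of random bytes. On a real machine the
# source would be a block device and the copy made with dd/ddrescue.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "disk")
img = os.path.join(workdir, "disk.img")
with open(src, "wb") as f:
    f.write(os.urandom(1 << 16))

before = sha256_of(src)            # hash the evidence first
shutil.copyfile(src, img)          # the "imaging" step
assert sha256_of(img) == before    # the image matches the source exactly
print("image verified:", before[:16], "...")
```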
Lamar Owen
2011-Dec-30 16:51 UTC
[CentOS] what percent of time are there unpatched exploits against default config?
On Friday, December 30, 2011 11:19:46 AM Marko Vojinovic wrote:
> You are basically saying that, given enough resources, you can precalculate
> all hashes for all possible passwords in advance.
>
> Can the same be said for keys? Given enough resources, you could precalculate
> all possible public/private key combinations, right?

Public-key crypto's security is based on the cost of factoring and of finding large prime numbers; hashing is somewhat different and relies on 'one-way' functions that are very difficult to reverse. There are similarities and some sharing between the algorithms, but the difficulty of reversal rests on different mathematical properties. However, at least for some hashes on some OSes, precalculation of hashes is not just theory; look up 'Rainbow Tables' some day (see https://en.wikipedia.org/wiki/Rainbow_tables).
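Why precomputation works against unsalted hashes, and why per-user salts defeat it, can be shown in a few lines. This is an illustrative sketch using plain SHA-256, not the actual crypt()/shadow schemes a CentOS box uses:

```python
import hashlib
import os

def unsalted(pw):
    """Hash the password alone -- every user with this password gets the same hash."""
    return hashlib.sha256(pw.encode()).hexdigest()

def salted(pw, salt):
    """Mix in a per-user random salt before hashing."""
    return hashlib.sha256(salt + pw.encode()).hexdigest()

# Rainbow-table idea: precompute hashes of common passwords ONCE,
# then reuse the table against every stolen unsalted hash.
table = {unsalted(p): p for p in ["123456", "password", "letmein"]}
stolen = unsalted("letmein")
print(table.get(stolen))  # instantly recovered: letmein

# With a random salt per user, the same password hashes differently
# for every account, so no single precomputed table covers them all.
s1, s2 = os.urandom(16), os.urandom(16)
print(salted("letmein", s1) != salted("letmein", s2))  # True
```

(Real rainbow tables add a time/memory trade-off via hash chains rather than storing every hash, but the precomputation-versus-salt point is the same.)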