Bennett Haselton
2012-Jan-01 22:23 UTC
[CentOS] an actual hacked machine, in a preserved state
(Sorry, third time -- last one, promise, just giving it a subject line!)

OK, a second machine hosted at the same hosting company has also apparently been hacked. Since 2 out of 3 machines hosted at that company have now been hacked, but this hasn't happened to any of the other 37 dedicated servers that I've got hosted at other hosting companies (also CentOS, same version or almost), this makes me wonder if there's a security breach at this company, like if they store customers' passwords in a place that's been hacked. (Of course it could also be that whatever attacker found an exploit was just scanning that company's address space for hackable machines, and didn't happen to scan the address space of the other hosting companies.)

So, following people's suggestions, the machine is disconnected and hooked up to a KVM so I can still examine the files. I've found this file:

-rw-r--r-- 1 root root 1358 Oct 21 17:40 /home/file.pl

which appears to be a copy of this exploit script:
http://archive.cert.uni-stuttgart.de/bugtraq/2006/11/msg00302.html

Note the last-mod date of October 21. No other files on the system were last modified on October 21st. However there was a security advisory dated October 20th which affected httpd:
http://mailinglist-archive.com/centos-announce/2011-10/00035-CentOSannounce+CESA20111392+Moderate+CentOS+5+i386+httpd+Update
https://rhn.redhat.com/errata/RHSA-2011-1392.html

and a large number of files on the machine, including lots of files in /usr/lib64/httpd/modules/ and /lib/modules/2.6.18-274.7.1.el5/kernel/, have a last-mod date of October 20th. So I assume that these are files which were updated automatically by yum as a result of the patch that goes with this advisory -- does that sound right?

So a couple of questions that I could use some help with:

1) The last patch affecting httpd was released on October 20th, and the earliest evidence I can find of the machine being hacked is a file dated October 21st. This could be just a coincidence, but could it also suggest that the patch on October 20th introduced a new exploit, which the attacker then used to get in on October 21st?

(Another possibility: I think that when yum installs updates, it doesn't actually restart httpd. So maybe even after the patch was installed, my old httpd instance kept running and was still vulnerable? As for why it got hacked the very next day, maybe the attacker looked at the newly released patch and reverse-engineered it to figure out where the vulnerabilities were that the patch fixed?)

2) Since the /var/log/httpd/ and /var/log/secure* logs only go back 4-5 weeks by default, it looks like any log entries related to how the attacker would have gotten in on or before October 21st are gone. (The secure* logs do show multiple successful logins as "root" within the last 4 weeks, mostly from IP addresses in Asia, but that's to be expected once the machine was compromised -- it doesn't help track down how they originally got in.) Anywhere else that the logs would contain useful data?
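A sketch of how that yum-update assumption can be cross-checked with stock rpm commands (/home/file.pl is the suspect file named above):

    # packages by install/update time, most recent first
    rpm -qa --last | head -20
    # verify the on-disk httpd files against the RPM database
    rpm -V httpd
    # a file planted by an attacker won't belong to any package
    rpm -qf /home/file.pl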
Eero Volotinen
2012-Jan-01 22:55 UTC
[CentOS] an actual hacked machine, in a preserved state
2012/1/2 Bennett Haselton <bennett at peacefire.org>:
> [full quote of the original message snipped]

sshd with root login enabled, with a very bad password?

--
Eero
Rilindo Foster
2012-Jan-02 00:57 UTC
[CentOS] an actual hacked machine, in a preserved state
On Jan 1, 2012, at 5:23 PM, Bennett Haselton <bennett at peacefire.org> wrote:
> [full quote of the original message snipped]

Do you have SELinux enabled? If not, you might need to turn that on, as that would have prevented that exploit.
Les Mikesell
2012-Jan-02 01:01 UTC
[CentOS] an actual hacked machine, in a preserved state
On Sun, Jan 1, 2012 at 4:23 PM, Bennett Haselton <bennett at peacefire.org> wrote:

> So, following people's suggestions, the machine is disconnected and hooked
> up to a KVM so I can still examine the files. I've found this file:
> -rw-r--r-- 1 root root 1358 Oct 21 17:40 /home/file.pl
> which appears to be a copy of this exploit script:
> http://archive.cert.uni-stuttgart.de/bugtraq/2006/11/msg00302.html
> Note the last-mod date of October 21.

Did you do an rpm -Va to see if any installed files were modified besides your own changes? Even better if you have an old backup that you can restore somewhere and run an rsync -avn against the old/new instances.

> Anywhere else that the logs would contain useful data?

/root/.bash_history might be interesting. Obviously this would be after the fact, but maybe they are trying to repeat the exploit with this machine as a base.

--
Les Mikesell
lesmikesell at gmail.com
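A sketch of both checks (the backup mount point /mnt/backup is hypothetical):

    # verify all installed packages against the rpm database
    rpm -Va > /root/rpm-verify.txt
    # dry-run diff of a restored backup against the live tree
    rsync -avn /mnt/backup/ / | less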
On Sun, 2012-01-01 at 14:23 -0800, Bennett Haselton wrote:
> [full quote of the original message snipped]

----

The particular issue which was patched by this httpd (Apache) update was a problem with reverse proxy, so the first question is: did this server actually have a reverse proxy configured?
My next thought is that since this particular hacker managed to get access to more than one of your machines, is it possible that there is a mechanism (i.e. a pre-shared public key) that would allow them access to the second server from the first server they managed to crack? The point being that this computer may not have been the one that they originally cracked, and there may not be evidence of cracking on this computer. The script you identified would seem to be a script for attacking other systems, and by the time it landed on your system, it was already broken into.

There are some tools to identify a hacker's access, though they are often obscured by the hacker...

last # reads /var/log/wtmp and provides a list of users, login date/time, login duration, etc. Read the man page for last to get other options on its usage, including the '-f' option to read older wtmp log files if needed.

lastb # reads /var/log/btmp much as above but lists 'failed' logins, though this requires pro-active configuration, and if you didn't do that, you probably will do that going forward.

Look at /etc/passwd to see what users are on your system and then search their $HOME directories carefully for any evidence that their account was the first one compromised. Very often, a single user with a weak password has his account cracked, and then a hacker can get a copy of /etc/shadow and brute-force the root password. Consider that this type of activity is often done with 'hidden' files & directories. This hacker was apparently brazen enough to operate openly in /home, so it's likely that he wasn't very concerned about his cracking being discovered.

The most important thing to do at this point is to figure out HOW they got into your systems in the first place, and discussions of SELinux and yum updates are not useful to that end. Yes, you should always update and always run SELinux, but that's not useful in determining what actually happened.

Make a list of all open ports on this system; check the directories, files, and data from all daemons/applications that were exposed (Apache? PHP? MySQL? etc.) and be especially vigilant in any directories where user apache had write access (a sketch of these checks follows below).

Again though, I am concerned that your first action on your first discovered hacked server was to wipe it out, and of a notion that it's entirely possible that the actual cracking occurred on that system, and this (and perhaps other servers) are simply free gifts to the hacker because they had pre-shared keys or the same root password.

Craig
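A minimal sketch of those checks, assuming the stock EL5 log locations:

    # successful logins, from /var/log/wtmp
    last -f /var/log/wtmp
    # failed logins, from /var/log/btmp (only if btmp logging was enabled)
    lastb -f /var/log/btmp
    # all listening ports and the processes behind them
    netstat -tulnp
    # directories owned by apache, or world-writable ones
    find / -xdev -type d \( -user apache -o -perm -0002 \) 2>/dev/null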
On Tuesday, January 03, 2012 03:24:34 PM Bennett Haselton wrote:

> That there are 10^21 possible random 12-character alphanumeric passwords
> -- making it secure against brute-forcing -- is a fact, not an opinion.

> To date, *nobody* on this thread has ever responded when I said that
> there are 10^21 possible such passwords and as such I don't think that
> the password can be brute-forced in that way.

Hmm, methinks you need to rethink the number. The number of truly random passwords available from a character set with N characters and of a length L is N^L. (See https://en.wikipedia.org/wiki/Password_strength#Random_passwords )

If L=12, then for:

The numerals only: 10^12
The uppercase alphabet only: 26^12 (9.5x10^16)
Uppers and lowers: 52^12 (3.9x10^20)
Numerals plus uppers and lowers: 62^12 (3.2x10^21)
Base64: 64^12 (4.7x10^21)
Full ASCII printables, minus space: 94^12 (4.76x10^23)

This of course includes repeating characters. NIST recommends 80-bit entropy-equivalent passwords for secure systems; 12 characters chosen at random from the full 95 ASCII printable characters doesn't quite make that (at a calculated 78 bits or so).

Having said all of that, please see and read:
http://security.stackexchange.com/questions/3887/is-using-a-public-key-for-logging-in-to-ssh-any-better-than-saving-a-password

The critical thing to remember is that in key auth the authenticating key never leaves the client system; rather, an encrypted 'nonce' is sent (the nonce is encrypted by the authenticating key), which only the server, possessing the matching key to the authenticating key, can successfully decrypt. This effectively gives full bit-strength for the whole key; 1024 bits of entropy, if you will, for 1024-bit keys. This would appear to require a 157-character random password chosen from all 95 ASCII printable characters to match, in terms of bit entropy. Obviously, the authenticating key's protection is paramount, and the passphrase is but one part of that. But that key never travels over the wire.

In stark contrast, in password auth the password has to leave the system and traverse the connection, even if it's over an encrypted channel (it's hashed on the server end and compared to the server's stored hash, plus salt (always did like a little salt with my hash....!), not the client, right? After all, the client may not possess the algorithm used to generate the hash, but password auth still works.). This leaves a password vulnerable to a 'man in the middle' attack if a weakness is found in the negotiated stream cipher used in the channel. Even with a full man-in-the-middle 'sniff' going on, the key pair authentication is as strong as the crypto used to generate the key pairs, which can be quite a bit stronger than the stream cipher. (56-bit DES, for instance, can be directly brute-forced in 'reasonable' amounts of time now.)

Pfft, if I understand the theory correctly (and I always reserve the right to be wrong!), you could, in theory, securely authenticate with keys over an unencrypted session, with a long enough nonce plaintext and the nonce's ciphertext traversing the wire in the clear. Of course, you encrypt the actual session for other reasons (primarily to prevent connection hijacking, since that defeats the Authorization and Accountability portions of triple-A), but the session encryption and the key auth are two distinct processes.
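The bit-entropy figures above are easy to reproduce; a quick sketch with bc:

    # entropy of a 12-char password over a 62-char alphanumeric set
    echo 'scale=2; 12 * l(62) / l(2)' | bc -l    # ~71.45 bits
    # over the 94 printable ASCII characters (minus space)
    echo 'scale=2; 12 * l(94) / l(2)' | bc -l    # ~78.66 bits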
I've been reading this thread with some amusement; while brute-forcing is an interesting problem, it is far from the most important such problem, and simple measures, like not allowing random programs to listen on just any port (or, for that matter, make random outbound connections to just any port!), are just basic stuff. Machines are hacked into almost routinely these days without any initial knowledge of the authentication method or credentials, but through security vulnerabilities in the system as set up by the admin.

SELinux and iptables are two of the most useful technologies for helping combat this problem, but they (and any other tools to improve security) must be properly configured to do any good at all. And there are many, many more Best Practices which have been (and continue to be) refined by many years of experience out in the field. Reality is reality, whether the individual admin likes it or not, sorry.
On Sunday, January 01, 2012 06:27:32 PM Bennett Haselton wrote:

> (I have already practically worn out my keyboard explaining the math behind
> why I think a 12-character alphanumeric password is secure enough :) )

Also see: https://lwn.net/Articles/369703/
On Tuesday, January 03, 2012 06:12:10 PM Bennett Haselton wrote:

> I'm not sure what their logic is for recommending 80. But 72 bits
> already means that any attack is so improbable that you'd *literally*
> have to be more worried about the sun going supernova.

I'd be more worried about Eta Carinae than our sun, as with its mass it's likely to be a GRB. The probability of it happening in our lifetime is quite low; yet, if it does happen in our lifetime (actually, if it happened about 7,500 years ago!) it will be an extinction event. So we watch it over time (and we have plates of it going back into the late 1800's).

Likewise for security; the Gaussian curve does have outliers, after all, and while it is highly unlikely for a brute-force attack to actually come up with anything against a single server, it is still possible, partially due to the number of servers out there coupled with the sheer number of brute-forcers running. The odds are not 1 out of 4.7x10^21; they're much better than that, since there isn't just a single host attempting the attack. If I have a botnet of 10,000,000 infected PCs available to attack 100,000,000 servers (close to the number), what are the odds of one of those attacks succeeding? (The fact is that it has happened already; see my excerpted 'in the wild' brute-forcer dictionary below.)

> > The critical thing to remember is that in key auth the authenticating
> > key never leaves the client system, ...

> Actually, the top answer at that link appears to say that the server
> sends the nonce to the client, and only the client can successfully
> decrypt it. (Is that what you meant?)

That's session setup, not authentication. The server has to auth to the client first for session setup, but then client auth is performed. But either way the actual client authenticating key never traverses the wire and is unsniffable.

> Furthermore, when you're dealing with probabilities that ridiculously
> small, they're overwhelmed by the probability that an attack will be
> found against the actual algorithm (which I think is your point about
> possible weaknesses in the stream cipher).

This has happened; read some SANS archives. There have been and are exploits in the wild against SSH and SSL; they even caused OpenBSD to have to back down from its claim of never having had a remotely exploitable root attack.

> However, *then* you have to take into account the fact that, similarly,
> the odds of a given machine being compromised by a man-in-the-middle
> attack combined with cryptanalysis of the stream cipher, is *also*
> overwhelmed by the probability of a break-in via an exploit in the
> software it's running. I mean, do you think I'm incorrect about that?

What you're missing is that low probability is not a preventer of an actual attack succeeding; people do win the lottery even with the odds stacked against them.

> Of the compromised machines on the Internet, what proportion do you
> think were hacked via MITM-and-advanced-crypto, compared to exploits in
> the services?

I don't have sufficient data to speculate. SANS or CERT may have that information.

> and if I hadn't stood my ground about that,
> the discussion never would have gotten around to SELinux, which, if it
> works in the manner described, may actually help.

The archives of this list already had the information about SELinux contained in this thread.
Not to mention the clear and easily accessible documentation from the upstream vendor linked to from the CentOS website.

> The problem with such "basic stuff" is that in any field, if there's no
> way to directly test whether something has the desired effect or not, it
> can become part of accepted "common sense" even if it's ineffective.

Direct testing of both SELinux and iptables effectiveness is doable, and is done routinely by pen-testers. EL6 has the tools necessary to instrument and control both, and more are available by adding third-party repositories (in particular, there is a security repo out there).

> If your server does get broken into
> and a customer sues you for compromising their data, and they find that
> you used passwords instead of keys for example, they can hire an
> "expert" to say that was a foolish choice that put the customer's data
> at risk.

There is this concept called due diligence. If an admin ignores known industry standards and then gets compromised because of that, then that admin is negligent. Thus, risk analysis and management is done to weigh the costs of the security against the costs of exploit; or, to put it in the words of a security consultant we had here (the project is, unfortunately, under NDA, so I can't drop the name of that consultant): "You will be or are compromised now; you must think and operate that way to mitigate your risks."

Regardless of the security you think you have, you will be compromised at some point. The due diligence is being aware of that and being diligent enough so that a server won't have been compromised for two months or longer before you find it. Secure passwords by themselves are not enough. Staying patched, by itself, is not enough. IPtables and other network firewalls by themselves are not enough. SELinux and other access controls (such as TOMOYO and others) by themselves are not enough. And none of these technologies are 'set it and forget it' technologies. You need awareness, multiple layers, and diligent monitoring. And I would change your sentence slightly; it's not a matter of 'if' your server is going to get broken into, it's 'when.'

> Case in point: in the *entire history of the Internet*, do you think
> there's been a single attack that worked because squid was allowed to
> listen on a non-standard port, that would have been blocked if squid had
> been forced to listen on a standard port?

I'm sure there has been, whether it's documented or not. I'm not aware of a comprehensive study into all the possible avenues of access; but the various STIGs exist for valid reasons (see Johnny's post to those standards and best practices). Best practices are things learned in the field by looking to see what is being used and has been used to break in; they're not just made up out of whole cloth. And if the theory says it shouldn't be possible, but in reality it has happened, then the theory has issues; empirical data trumps all. It was impossible for Titanic to sink, but it did anyway.

But it's not just squid; SELinux implements mandatory access controls; this means that the policy is default deny for all services, all files, and all ports. It has nothing to do with squid only; it's about a consistently implemented security policy where programs only get the access that they have to have to do the job they need to do. Simply enforcing that rule consistently can help eliminate backdoor processes (which can easily be implemented over encrypted GRE tunnels through a loopback device; at least the couple I've seen were).
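For instance, SELinux's port policy can be inspected and exercised directly with semanage (the port number 8123 is just an example):

    # which ports a confined httpd may bind to
    semanage port -l | grep http_port_t
    # allowing a non-standard port is an explicit, auditable decision
    semanage port -a -t http_port_t -p tcp 8123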
You don't seem to have much experience with dealing with today's metasploit-driven multilayer attacks.

> What's unique about security advice is that some of it can be rejected
> just on logical grounds, to move the discussion on to something more
> likely to help.

You need to learn more about what you speak of before you say such unfounded speculative things, really. The attacks mentioned are real; we're not making this stuff up off the top of our heads, Bennett. Nor am I making up the contents of a brute-forcer dictionary I retrieved, about three years ago, from a compromised machine here, which contains the following passwords in it:

...
root:LdP9cdON88yW
root:u2x2bz
root:6e51R12B3Wr0
root:nb0M4uHbI6M
root:c3qLzdl2ojFB
root:LX5ktj
root:34KQ
root:8kLKwwpPD
root:Bl95X1nU
root:3zSlRG73r17
root:fDb8
root:cAeM1KurR
root:MXf3RX7
root:4jpk
root:j00U3bG1VuA
root:HYQ9jbWbgjz3
root:Ex4yI8
root:k9M0AQUVS5D
root:0U9mW4Wh
root:2HhF19
root:EmGKf4
root:8NI877k8d5v
root:K539vxaBR
root:5gvksF8g55b
root:TO553p9E
root:7LX66rL7yx1F
root:uOU8k03cK2P
root:l9g7QmC9ev0
root:E8Ab
root:98WZ4C55
root:kIpfB0Pr3fe2
...

How do you suppose those passwords (along with 68,887 others) got into that dictionary? Seriously: the passwords are there and they didn't just appear out of thin air. The people running those passwords thought they were secure enough, too, I'm sure. The slow brute-forcers are at work, and are spreading. This is a fact; and some of those passwords are 12-character alphanumerics (some with punctuation symbols) with 72 bits of entropy, yet there they are, and not just one of them, either. Facts are stubborn things.
[Distilling to the core matter; everything else is peripheral.]

On Jan 4, 2012, at 2:58 PM, Bennett Haselton wrote:

> To be absolutely clear: Do you, personally, believe there is more than a
> 1 in a million chance that the attacker who got into my machine got it
> by brute-forcing the password? As opposed to, say, using an underground
> exploit?

Here's how I see it breaking down:

1.) Attacker uses an apache remote exploit (or other means) to obtain your /etc/shadow file (not a remote shell, just GET the file without that fact being logged);
2.) Attacker runs a cloud-based (and/or CUDA-accelerated) brute-forcer on 10,000,000 machines against your /etc/shadow file without your knowledge;
3.) Some time passes;
4.) Attacker obtains your password using distributed brute-forcing of the hash in the window of time prior to you resetting it;
5.) Attacker logs in, since you allow password login.

You're pwned by a non-login brute-force attack.

In contrast, with ssh keys and no password logins allowed:

1.) Attacker obtains /etc/shadow and cracks your password after some time;
2.) Attacker additionally obtains /root/.ssh/*;
3.) Attacker now has your public key. Good for them; public keys don't have to be kept secure, since it is vastly more difficult to reverse known plaintext, known ciphertext, and the public key into a working private key than it is to brute-force the /etc/shadow hash (part of the difficulty is getting all three required components to successfully reverse your private key; the other part boils down to factoring and hash brute-forcing);
4.) Attacker also has root's public and private keys, if there is a pair in root's ~/.ssh, which may or may not help them. If there's a passphrase on the private key, it's quite difficult to obtain that from the key;
5.) Attacker can't leverage either your public key or root's key pair (or the machine key; even if they can leverage that to do MitM (which they can and likely will), that doesn't help them obtain your private key for authentication);
6.) Attacker still can't get in, because you don't allow password login, even though the attacker has root's password.

This only requires an apache httpd exploit that allows reading of any file; no files have to be modified and no shells have to be acquired through any exploits. Those make it faster, for sure; but even then the attacker is going to acquire your /etc/shadow as one of the first things they do; the next thing they're going to do is install a rootkit with a backdoor password. Brute-forcing by hash-cracking, not by attempting to log in over ssh, is what I'm talking about. This is what I mean when I say 'multilayer metasploit-driven attacks.'

The weakest link is the security of /etc/shadow on the server for password auth (unless you use a different auth method on your server, like LDAP or other, but that just adds a layer, making the attacker work harder to get that all-important password). Key-based auth is superior, since an attacker reading any file on your server cannot compromise the security. Kerberos is better still.

Now, the weakest link for key auth is the private key itself. But it's better protected than any password is (if someone can swipe your private key off of your workstation you have bigger problems, and they will have your /etc/shadow for your workstation, and probably a backdoor.....). The passphrase is also better protected than the typical MD5-hashed password, too.
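How cheap that hash-cracking step is depends partly on the hash in use; a sketch for checking and hardening it, assuming EL5.2+ or EL6 (run as root):

    # $1$ = MD5, $6$ = SHA-512, in the second field of /etc/shadow
    awk -F: '$2 ~ /^\$/ {print $1, substr($2, 1, 3)}' /etc/shadow
    # hash new passwords with SHA-512 instead of MD5
    authconfig --passalgo=sha512 --update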
It is the consensus of the security community that key-based authentication with strong private key passphrases is better than any password-only authentication, and that consensus is based on facts derived from evidence of actual break-ins. While login-based brute-forcing of a password that is long enough (based upon sshd/login/hashing speed) is impractical for passwords of sufficient strength, login-based brute-forcing is not the 'state of the art' in brute-forcing of passwords. Key-based auth with a passphrase is still not the ultimate, but it is better than only a password, regardless of the strength of that password. If your password was brute-forced, it really doesn't matter how the attacker did it; you're pwned either way.

It is a safe assumption that there are httpd exploits in the wild, not known by the apache project, that specifically attempt to grab /etc/shadow and send it to the attacker. It's also a safe assumption that the attacker will have sufficient horsepower to crack your password from /etc/shadow in a 'reasonable' timeframe for an MD5 hash. So you don't allow password authentication, and you're not vulnerable to a remote /etc/shadow brute-forcing attack, regardless of how much horsepower the attacker can throw your way, and regardless of how the attacker got your /etc/shadow (you could even post it publicly and it wouldn't help them any!).
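Disallowing password authentication comes down to a couple of lines in /etc/ssh/sshd_config (a minimal sketch; restart sshd afterwards):

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    # root may still log in, but only with a key
    PermitRootLogin without-password

    # then:  service sshd restart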
On Wednesday, January 04, 2012 08:47:47 PM Bennett Haselton wrote:

> Well yes, on average, password-authentication is going to be worse
> because it includes people in the sample who are using passwords like
> "Patricia". Did they compare the break-in rate for systems with 12-char
> passwords vs. systems with keys?

And this is where the rubber meets the road. Keys are uniformly secure (as long as physical access to the private key isn't available to the attacker); passwords are not. It is a best practice to not run password auth on a public-facing server running ssh on port 22. Simple as that. Since this is such a basic best practice, it will get mentioned anytime anyone mentions using a password to log in remotely over ssh as root; the other concerns and possible exploits are more advanced than this.

Addressing that portion of this thread: it's been my experience that once an attacker gains root on your server, you have a very difficult job on your hands determining how they got in; specialized forensics tools that analyze more than just logs can be required to adequately find this; that is, this is a job for a forensics specialist. Now, anyone (yes, anyone) can become a forensics specialist, and I encourage every admin to at least know enough about forensics to be able to take a forensics-quality image of a disk and do some simple forensics-quality read-only analysis (simply mounting, even as read-only, an ext3/4 filesystem breaks full forensics, for instance).

But when it comes to analyzing today's advanced persistent threats and break-ins related to them, you should at least read what experts in this field, like Mandiant's Kevin Mandia, have written (there's a slashdot story about him and exactly this sort of thing; see http://it.slashdot.org/story/12/01/04/0630203/cleaning-up-the-mess-after-a-major-hack-attack for details). He's a nice guy, too. I would suspect that no one on this list would be able or willing to provide a full analysis on-list; perhaps privately, though, and/or for a fee.

In conclusion, as I am done with this branch of this thread, I'd recommend you read:
http://www.securelist.com/en/blog/625/The_Mystery_of_Duqu_Part_Six_The_Command_and_Control_servers
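A minimal sketch of that forensics-quality imaging (the device and destination paths are assumptions; 'noload' matters because ext3/4 replays the journal even on a plain read-only mount):

    # image the disk and checksum it for evidentiary integrity
    dd if=/dev/sda of=/mnt/usb/evidence.dd bs=4M conv=noerror,sync
    sha256sum /mnt/usb/evidence.dd > /mnt/usb/evidence.dd.sha256
    # examine the image without touching the journal
    mount -o ro,loop,noload /mnt/usb/evidence.dd /mnt/analysis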
On Thursday, January 05, 2012 02:25:50 PM Ljubomir Ljubojevic wrote:

> What is sentiment about having dedicated box with only ssh, and then use
> that one to raise ssh tunnels to inside systems? So there is no exploits
> to be used, denyhosts in affect?

Without being too specific, I already do this sort of thing, but with two 'bastion' hosts in a failover/load-balanced scenario on physical server hardware. I use a combination of firewalling to keep incoming traffic on port 22 out of the other hosts, using NAT rules, Cisco incoming and outgoing ACLs on the multiple routers between the servers and the 'outside' world, iptables, and other means (a sketch of the iptables piece follows below). In particular, Cisco's NAT 'extendable' feature enables interesting layer 4 switching possibilities. I'm not going to say that it's perfectly secure and won't ever allow a penetration, but it seems to be doing a pretty good job at the moment.

Improvements I could make would include:

1.) Boot and run the bastion hosts from a customized LiveCD or LiveDVD on real DVD-ROM read-only drives with no persistent storage (updating the LiveCD/DVD image periodically with updates and with additional authentication users/data as needed; DVD+RW works very well for this as long as the boot drive is a DVD-ROM and not an RW drive!);

2.) Scheduled rolling reboots of the bastion hosts using a physical power timer (rebooting each machine at a separate time once every 24 hours, during hours when remote use wouldn't happen (the best time is during local lunchtime, actually); the boxes are set to power on automatically upon power restoration after loss);

3.) Port knocking and similar techniques for the bastion hosts, in addition to the layered ssh solution in place (I'm using NX, which logs in as the nx user via keys first, then authenticates the user, either with keys or with a password);

4.) Packetfence or a similar snort IDS box sitting on the ethernet VLANs of these boxes, with custom rules designed to detect intrusions in progress and dynamically add ACLs to the border routers upon detection (this one will take a while).

I'm still thinking of unusual ways of securing; I've looked at tarpits and honeypots, too, and have really enjoyed some of the more arcane advice I've seen on this list in the past. I still want the device used to remotely fry the computer in the movie 'Electric Dreams', personally..... :-)
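The iptables piece mentioned above might look like this on the inside hosts (a sketch; the bastion address 192.0.2.10 is hypothetical):

    # accept ssh only from the bastion pair, drop everything else
    iptables -A INPUT -p tcp --dport 22 -s 192.0.2.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP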
On Jan 5, 2012, at 11:13 PM, email builder wrote:

> I don't mean to thread-hijack, but I'm curious, if apache runs as its
> own non-root user and /etc/shadow is root-owned and 0400, then
> how could any exploit of software not running as root ever have
> access to that file?

To listen on the default port 80, httpd requires running as root. According to the Apache httpd site, the main httpd process continues running as root, with the child processes dropping privileges to the other user. See:

http://httpd.apache.org/docs/2.1/invoking.html
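That privilege drop is controlled by two directives in httpd.conf (these are the stock CentOS defaults):

    # httpd.conf: the parent stays root to bind port 80; the forked
    # children switch to this unprivileged user to serve requests
    User apache
    Group apache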