At 01:59 PM 6/29/98 +0000, Kokoro Security Administrator wrote:
>Hello everyone -
>
>I am looking for the name of a piece of hardware, and don't know what it
>is called. I am told that there exists such a thing (a switch? a router?
>a special hub?) that will only send me traffic that is destined for me.

Simple definitions:

--router: looks at a layer 3 address (such as an IP address) and forwards traffic between two or more interfaces based upon that address; also handles other duties such as media conversion; will change the datagram to reflect routing. Routers segment broadcast domains.

--bridge: looks at a layer 2 address (such as an ethernet MAC) and forwards traffic between two interfaces based upon that address; sometimes handles other duties such as media conversion; often sends datagrams through transparently. Bridges segment collision domains.

--switch: a multi-interface bridge; typically does not do media conversion; often sends datagrams through transparently.

>In other words, I am one of 100 users on a LAN, say, and all traffic on
>this LAN gets routed through this
>hub-like-thing-whose-name-I-am-searching. This thing knows all the
>ethernet interfaces that is connected to it, and it only sends to
>interface x the packets that are destined for that interface or are
>broadcast.

There are a couple of special cases you should be aware of, particularly unknown unicast. If a switch sees a destination MAC that it has not seen before, it typically floods that packet on all (other) interfaces -- thus you may see traffic not destined for you, until (and if) the switch's bridge table picks up the new MAC. (Thus switches are usually configured to learn new MACs over time.)

Important note: a switch is NOT a security device! It is engineered to improve throughput on a network by reducing collision domains. Yes, you will only see traffic destined for you, usually. However, devices that are not engineered to be secure usually aren't! For example, bridge tables are finite. If the switch is configured in learning mode, then the black hat need only flood the bridge table with new MACs -- the old, private MACs are deleted -- and now all traffic is again visible. Put in a switch to improve bandwidth, not out of a sense of security.

>Is this a switch? Does it even exist?
>
>Thanks for any replies to help a novice -
>
>Richard Hakim

--woody
--
Robert Wooddell Weaver            email: woody.weaver@wiltelnsi.com
Network Engineer                  voice: 510.358.3972
Williams Communication Solutions  pager: 510.702.4334
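[Archive note: the bridge-table behavior described above (source learning, unknown-unicast flooding, and the finite-table attack) can be sketched as a toy model. The class and names below are illustrative only, not any vendor's API.]

```python
# Toy model of a learning switch's bridge (CAM) table. It learns source
# MACs, forwards known destinations out a single port, floods unknown
# unicast, and -- because the table is finite -- evicts old entries when
# an attacker floods it with new MACs.

class LearningSwitch:
    def __init__(self, num_ports, table_size):
        self.num_ports = num_ports
        self.table_size = table_size   # bridge tables are finite
        self.table = {}                # MAC -> port, learned over time

    def receive(self, src_mac, dst_mac, in_port):
        """Return the list of ports the frame is sent out on."""
        # Learn the source MAC; if the table is full, evict the oldest
        # entry (the "old, private MACs are deleted" failure mode).
        if src_mac not in self.table and len(self.table) >= self.table_size:
            oldest = next(iter(self.table))
            del self.table[oldest]
        self.table[src_mac] = in_port
        # Known destination: send on one port. Unknown unicast: flood.
        if dst_mac in self.table:
            return [self.table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]
```

For example, once "victim" is learned, frames to it go out only its own port; after an attacker on another port floods enough bogus source MACs to evict the entry, frames to "victim" are flooded to every port again, including the attacker's.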
Christopher Hicks
1998-Jun-30 06:00 UTC
[linux-security] Re: A switch? A router? What am I looking for??
On Mon, 29 Jun 1998, Woody Weaver wrote:
> Put in a switch to improve bandwidth, not out of a sense of security.

Security in depth is good. A switch's primary purpose is and should be improved bandwidth. But it also helps security. MAC floods can be detected. That's enough to dissuade some threats. The attraction of packet sniffing attacks is the difficulty of detection.

But the principle of security in depth is the real issue I'm trying to address. It is often missed. Just because a switch or a firewall or a lock on a file cabinet isn't perfect does not mean that it shouldn't be part of a complete security plan. People lock their office door then leave root logged in. People buy a firewall and then run their systems without patches or proper passwords. Bad. Bad. There are few either-or choices in security that shouldn't be answered "both".

</chris>
--
If trees could scream, would we be so cavalier about cutting them down? We might, if they screamed all the time, for no good reason.
~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~

From mail@mail.redhat.com Mon Jun 1 08:25:24 1998
From: "Brandon K. Matthews" <bmatt@devils.eng.fsu.edu>
Subject: "Flavors of Security Through Obscurity"
Date: Sat, 30 May 1998 11:44:39 -0500
To: linux-security@redhat.com

This was posted not too long ago on sci.crypt... Enjoy... I think the most relevant information is near the top, but it's all quite good... :-)

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

There is no intrinsic difference between algorithm and data; the same information can be viewed as data in one context and as algorithm in another. Why then do so many people claim that encryption algorithms should be made public and only the key should be kept secret? (This is the famous and derisive mantra about "security through obscurity".) There are several answers:

a) Some people, with little understanding of computer technology, try to keep secret what their programs do, even though the programs themselves are public. A program *is* a representation of the algorithm, even though it happens to be more difficult for humans to read than, say, a detailed description in English. Actually it is a very good idea to keep the algorithm secret (in all its representations), as long as you can afford to do so. That is why major governments do exactly that.

b) One can memorize a key and keep it secret in one's head. Normally, encryption algorithms are too complicated to be memorized. Therefore it is easier to keep secret a key than an algorithm.

c) Most people and organizations do not have sufficient expertise to create a new and good encryption algorithm and then try to keep it secret. A bad encryption algorithm, in this context, is an algorithm that can be broken by a sophisticated attacker even without knowledge about the algorithm itself.

As you see, the reasons are of a practical nature, and are not derived from any fundamentals in cryptology.
If we could find a practical way to keep secret both the key (that is, the data the encryption method operates on) and also the method itself (or at least part of the method), then security would be greatly enhanced because the attacker would have less knowledge to work with. I believe there are several ways to overcome these practical difficulties:

a) Machine-generated secret ciphers. Today there are only a few encryption algorithms that are generally accepted as good. But suppose there existed a generator program which could construct a new encryption algorithm depending on some random input. Actually, the generator program would produce another program, which would then be used as the encryption software. In some important cases, it is feasible to keep the resulting program secret: international organizations could distribute disks containing the program using trusted persons, the program could be loaded in centralized servers which actually operate from within a safe, or maybe the program (in encrypted form) would be run only from a floppy disk which would be handled with the same care as the key to a safe. We all know that absolute security is impossible. What I am suggesting here is that in many cases this system of security is better than one using a standardized and public algorithm, which attracts a lot of cryptanalytic work and may be broken in the near future or may have already been broken in secret.

b) Intrinsically secret ciphers. Extend secrecy to parts of the encryption method. In his book, Schneier very briefly describes a variant of DES where the S-boxes (which most people would consider part of the algorithm) are variable and depend on the key. Another very interesting possibility would have the key express the encryption method. In other words, consider the key as the program, and the cipher simply as an interpreter that follows the key's instructions to scramble the plaintext or unscramble the ciphertext. This would call for large keys, but not larger than keys used in public key encryption.

c) "Variable" ciphers. The idea here is to implement a cipher that incorporates a huge number of different encryption functions. The objective is to overwhelm the analytic capability of an attacker. (At the end of this post you will find the outline of a proof of why a cipher of this type is intrinsically more secure.) My GodSave encryption algorithm belongs to this class of "variable" ciphers. It extensively uses data of the key to change the control flow of the program execution. In other words, whereas most algorithms just operate on the key, GodSave uses information in the key to decide how to operate. Even better: it is constructed in such a way that large sections of its code can be modified by a programmer without affecting the security of the algorithm, offering some of the advantages described under a) above.

Here is the definition of another cipher of this type (let us call it "heavy DES"): Start by randomly defining 16K DES keys; you need less than 1 MB of space on your hard disk to save them. Suppose that this large key set is public (either by choice or because an attacker gained access to your computer and stole it). So now you have a large set of DES ciphers with *known* keys; the effort to break any one of them is 1. Now define a secret key of 140 bits. Use 14 bits at a time to index one of the 16K DES functions. Encrypt a 64-bit block by sequentially chaining the 10 indexed DES functions. DES is not a group, therefore each instance of the 140-bit key results in a different mapping of the plaintext space into the ciphertext space. If we choose from 2^N DES functions and chain P of them together (in the example above N=14 and P=10), then there are 2^(N*P) different instances. Already with 140 bits of entropy, a brute force attack is out of the question no matter how many hardware-coded DES machines you have.
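[Archive note: the key-indexing structure of "heavy DES" can be sketched as follows. A toy 64-bit scrambler stands in for real DES here, since no DES library is assumed; toy_cipher is NOT cryptographically secure and only illustrates how the 140-bit key selects and chains ciphers from the public table.]

```python
# Sketch of the "heavy DES" construction: a public table of 16K cipher
# keys, plus a 140-bit secret key consumed 14 bits at a time to pick
# which 10 ciphers to chain. There are 2^(14*10) = 2^140 instances.
import random

BLOCK_MASK = (1 << 64) - 1

def toy_cipher(block, subkey, rounds=4):
    # Stand-in for DES on a 64-bit block: xor, rotate, add. Each step is
    # invertible, so this is a permutation, but it is NOT secure.
    x = block & BLOCK_MASK
    for r in range(rounds):
        x ^= subkey
        x = ((x << 13) | (x >> 51)) & BLOCK_MASK   # rotate left by 13
        x = (x + (subkey >> r)) & BLOCK_MASK
    return x

# The 16K "DES keys" are public -- even known to the attacker.
rng = random.Random(0)
TABLE = [rng.getrandbits(64) for _ in range(1 << 14)]

def heavy_encrypt(block, secret_key_140):
    # Take 14 bits of the secret key at a time to index one of the 16K
    # ciphers, and chain the 10 selected ciphers in sequence.
    for i in range(10):
        idx = (secret_key_140 >> (14 * i)) & 0x3FFF
        block = toy_cipher(block, TABLE[idx])
    return block
```

The security rests entirely on which 10 of the 16K functions were chosen and in what order, not on the table itself being secret.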
Suppose you have perfect cryptanalytic knowledge of DES - trapdoor and all - even then, can you see a way to attack this variable version?

Finally, let me try to demonstrate why a "variable" cipher is more difficult to break: Take two ciphers A and B with keys of N bits. These ciphers must be independent in the sense that physically breaking one does not help in breaking the other. (By "physical break" I mean the work necessary to recover the key when the cipher is already cryptanalyzed; a "logical break" would be the work necessary to successfully cryptanalyze a cipher.) Let us suppose that these ciphers are not perfect, and therefore there exists a method (known or unknown) to physically break them that is more efficient than brute force, i.e. the trying of all possible keys. (Observe that no publicly known cipher with a key that is used repeatedly has been proved to be perfect in this sense.)

For ciphers A and B there exists then a method to break them with as few as 2^(N*k) operations, where k<1 (2^N corresponds to brute-forcing them). If you increase the key length by 1 bit, then you would need 2^((N+1)*k) = 2^(N*k) * 2^k operations to break A or B. But if you create a new cipher C with a key of N+1 bits, where the last bit is used to choose either A or B as the working cipher with a key of N bits, then you must break A, and with 50% probability B also, with an effort comparable to 2^(N*k) + 0.5*2^(N*k) = 3/2 * 2^(N*k). Therefore you need more effort to break C with a key of N+1 bits than either A or B with a key of N+1 bits, as long as k is less than ln(3/2)/ln(2) = 0.58.

If instead of two ciphers you started with 2^M different ciphers, then the results are more dramatic. The effort required for breaking the resulting cipher is now 2^(N*k-1)*(2^M+1), and it will be stronger as long as k < 1/M*(ln(2^M+1)/ln(2) - 1), or for large M: k < 1 - 1/M.
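[Archive note: taking the effort formulas above at face value, the thresholds can be checked numerically. The function names below are illustrative only.]

```python
# Numeric check of the "variable cipher" effort claims, assuming the
# formulas as given: breaking one cipher with an N-bit key costs 2^(N*k)
# operations, and the cipher built from 2^M independent ciphers costs
# 2^(N*k - 1) * (2^M + 1).
from math import log2

def amplification(M):
    # Effort for the combined cipher divided by effort for one cipher:
    # 2^(N*k - 1) * (2^M + 1) / 2^(N*k) = (2^M + 1) / 2.
    return (2**M + 1) / 2

def threshold_k(M):
    # The combined cipher with an (N+M)-bit key beats a single cipher
    # with an (N+M)-bit key as long as k stays below this bound.
    return (log2(2**M + 1) - 1) / M
```

With M=1 (two ciphers) this gives the 3/2 amplifier and the ln(3/2)/ln(2) = 0.585 threshold; with M=10 (1024 ciphers) it gives roughly 512x the effort, with the threshold approaching 1 - 1/M = 0.9.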
This works like a security amplifier: if you can construct 1024 independent ciphers, then by this method you can produce a cipher that has a 10-bit longer key and is provably 512 times more secure than any one of them (in the sense that an attacker must invest 512 times more effort to break it).

-=-=-=-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-=-

--
'L8r
-BKM
-----------------------------------------------------------------------------
Do not meddle in the affairs of Dragons    | Brandon Matthews - SysAdmin
For you are crunchy and good with mustard. | <bmatt@devils.eng.fsu.edu>
-- Anonymous                               | (850) 668-2677
-----------------------------------------------------------------------------
finger bmatt@devils.eng.fsu.edu for PGP public key
At 02:00 AM 6/30/98 -0400, Christopher Hicks wrote:
>On Mon, 29 Jun 1998, Woody Weaver wrote:
>> Put in a switch to improve bandwidth, not out of a sense of security.
>
>Security in depth is good. A switch's primary purpose is and should be
>improved bandwidth. But it also helps security. MAC floods can be
>detected. That's enough to dissuade some threats. The attraction of
>packet sniffing attacks is the difficulty of detection.

I agree with Chris' statement, but I agree with mine as well.

From the original author's comment, I deduced (incorrectly perhaps) that he was trying to find out what a switch was, and if it would be a security device. For a novice, imho, a switch is only a collision-domain manipulator, *not* a secure device. In particular, I would be concerned that a novice would drop a switch in place and believe that he was safe against passive sniffing attacks. Far better is to make it a criterion of the security policy which machines are able to see traffic from other machines, and choose an appropriate appliance or procedure to enforce that policy.

>But the principle of security in depth is the real issue I'm trying to
>address. It is often missed. Just because a switch or a firewall or a
>lock on a file cabinet isn't perfect does not mean that it shouldn't be
>part of a complete security plan. People lock their office door then
>leave root logged in. People buy a firewall and then run their systems
>without patches or proper passwords. Bad. Bad. There are few either-or
>choices in security that shouldn't be answered "both".

Please note the first sentence does not correspond to the examples listed. All the examples listed correspond to violations of (an implied) security policy, *not* missing security in depth. I *like* security in depth. When I have the luxury, I have one vendor's firewall on the perimeter, another vendor's firewall for the interior, and specify file encryptors and end-to-end encryption on administrative traffic. Belts and suspenders are good things, when money/convenience is outweighed by the risk/value computation.

What I'd like to point out (and what is often missed) is that security implementations must follow from the security policy -- not vice versa -- and that screwdrivers are not chisels even though they share some features. If the security policy states that machine A is not to see machine B's traffic, then the answer is not "let's buy a switch" -- it's a little deeper than that. It might involve buying a secure switch, but then it also involves a good deal of thought about configuration of the switch, and so on.

></chris>

--woody
--
Robert Wooddell Weaver            email: woody.weaver@wiltelnsi.com
Network Engineer                  voice: 510.358.3972
Williams Communication Solutions  pager: 510.702.4334