Hi @all!

I have two questions:
- First, am I right that the chance of getting the same 32-bit rolling checksum is 1/2^16, and the chance of getting the same 128-bit MD5 hash is 1/2^127?
- Second, I want to know whether it is possible to change a number of blocks manually. For example, I made a 100 MB file with "dd if=/dev/zero of=/home/test.xyz bs=1M count=100" and now I want to change, let's say, 10 blocks of this file. Is that possible? The block size above (bs=1M) has nothing to do with the block size rsync uses, right?!

Thanks a lot.
Greetings
David
On Thu, 2009-01-22 at 10:43 +0100, David de Lama wrote:
> - First, am I right that the chance of getting the same 32-bit rolling
> checksum is 1/2^16 and to get the same 128-bit MD5 Hash is 1/2^127?

You might know something I don't, but I would expect the collision probability to be 1/2^32 for 32 bits and 1/2^128 for 128 bits. These values are for two given blocks; to find the probability of at least one collision in a collection of blocks (e.g., a file), you would have to account for all pairs. The values further assume that the checksums of all the blocks under consideration are independent and uniformly random. Of course, one can craft an input file that causes a collision.

> - Finally I want two know if it is possible to change an amount of
> blocks manually?
> e.g. I made a 100 MB file with "dd if=/dev/zero of=/home/test.xyz
> bs=1M count=100" and know I want to change, lets say, 10 blocks of
> this file. Is it possible?

I guess you're performing some kind of test of the delta-transfer algorithm?

The block size for each file defaults to approximately the square root of its size. You can find out the exact block size rsync is using for a file by passing -vvv (--debug=deltasum2 in rsync 3.1.*) and looking for "blength=" in the output, or you can specify a block size to use for all files with the --block-size option. Then, just overwrite the desired areas of the file.

> The blocksize above (bs=1M) has nothing to do with the blocksize
> rsync uses, right?!

Correct, although setting the dd block size equal to the rsync block size and using the "seek" option does give you a convenient way to overwrite individual rsync blocks.

--
Matt
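To make the "two given blocks" versus "all pairs" distinction concrete, here is a small sketch of both calculations, assuming (as above) independent, uniformly random checksums; the block count of 100,000 is just an illustrative figure for a ~100 MB file split into ~1000-byte blocks:

```python
import math

def pairwise_collision_probability(bits: int) -> float:
    """Probability that two independent, uniformly random
    checksums of the given width are equal."""
    return 1.0 / 2**bits

def any_collision_probability(n_blocks: int, bits: int) -> float:
    """Birthday-style probability of at least one collision among
    n_blocks independent, uniformly random checksums."""
    space = 2**bits
    # P(no collision) = prod_{k=0}^{n-1} (1 - k/space); sum logs for stability
    log_p_none = sum(math.log1p(-k / space) for k in range(n_blocks))
    return 1.0 - math.exp(log_p_none)

# Two given blocks: 1/2^32, not 1/2^16
print(pairwise_collision_probability(32))

# ~100,000 blocks: accounting for all pairs makes a collision quite likely
print(any_collision_probability(100_000, 32))
```

With 100,000 blocks the second probability comes out well over one half, which is exactly why rsync does not rely on the 32-bit rolling checksum alone.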
On 22-Jan-2009, at 02:43, David de Lama wrote:
> Hi @all!
>
> I have two questions:
> - First, am I right that the chance of getting the same 32-bit
> rolling checksum is 1/2^16

Depends on the algorithm. Most 32-bit algorithms are not really 1:(2^16)-1.

> and to get the same 128-bit MD5 Hash is 1/2^127?

The chances of two files accidentally generating the same MD5 hash are about 1:2^100, or about 1 in 12,676,506,000,000,000,000,000,000,000. Why it's not 1 in (2^127)-1 is complicated, and might be part of the reason for MD5's exploitability. Enough said that MD5 does not use every possible potential hash.

Put another way, if you generate a new MD5 hash every *nanosecond*, it will take up to 40,170,281,500,000 YEARS to generate a collision (or about 3000 times longer than the universe has existed). To have a 50% chance of collision is considerably more likely (about 600 years if you generate one MD5 hash every nanosecond, or nearly 600,000,000,000 years (c. 50 times longer than the universe has existed) at a more reasonable rate of 1 hash per second).

If your concern with MD5 is accidental collisions, then MD5 is perfectly fine; you're not going to get hash collisions. If your concern is security, then MD5 is not acceptable because it has known vulnerabilities.

--
Advance and attack! Attack and destroy! Destroy and rejoice!
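The "about 600 years at one hash per nanosecond" figure can be sanity-checked with the standard birthday approximation n ≈ sqrt(2 ln 2 · 2^128) for a 50% collision chance over an ideal 128-bit hash; this is a back-of-the-envelope sketch about random collisions only, not about MD5's (broken) resistance to deliberate attacks:

```python
import math

BITS = 128
RATE_PER_SECOND = 1e9              # one hash per nanosecond
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Birthday approximation: hashes needed for a ~50% collision chance
n_for_half = math.sqrt(2 * math.log(2) * 2**BITS)
years = n_for_half / RATE_PER_SECOND / SECONDS_PER_YEAR
print(f"{years:.0f} years")
```

The result lands in the high hundreds of years, the same order of magnitude as the figure quoted above.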
Matthias Schniedermeyer
2009-Jan-23 08:47 UTC
Chance of equal checksum and changing blocks
On 22.01.2009 10:43, David de Lama wrote:
> Hi @all!
>
> I have two questions:
> - First, am I right that the chance of getting the same 32-bit rolling checksum is 1/2^16 and to get the same 128-bit MD5 Hash is 1/2^127?

No. The chance of an "accidental" collision with MD5 is 1/2^64. The "other half" of the bits is against brute-forcing.

See you

--
Real Programmers consider "what you see is what you get" to be just as bad a concept in Text Editors as it is in women. No, the Real Programmer wants a "you asked for it, you got it" text editor -- complicated, cryptic, powerful, unforgiving, dangerous.
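One common reading of that "1/2^64": for an ideal 128-bit hash, the birthday bound says roughly 2^(128/2) = 2^64 random hashes give about an even chance of some collision, so 2^64 is the effective work factor rather than 2^128. A quick sketch of that bound, under the usual independent-and-uniform assumption:

```python
import math

BITS = 128
space = 2**BITS

# Birthday bound: ~2^(bits/2) samples for a roughly even collision chance
n = 2**(BITS // 2)
p = 1 - math.exp(-n * (n - 1) / (2 * space))
print(p)   # close to 0.4 -- i.e., 2^64 samples is the rough break-even point
```

This is only one interpretation of the figure quoted above; the exact constant depends on how "accidental collision" is defined.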
David, please CC rsync@lists.samba.org in your replies so that others can help you and your messages are archived for others' future benefit.

On Fri, 2009-01-23 at 09:02 +0100, David de Lama wrote:
> >> - Finally I want two know if it is possible to change an amount of
> >> blocks manually?
> >> e.g. I made a 100 MB file with "dd if=/dev/zero of=/home/test.xyz
> >> bs=1M count=100" and know I want to change, lets say, 10 blocks of
> >> this file. Is it possible?
>
> >I guess you're performing some kind of test of the delta-transfer
> >algorithm?
> >
> >The block size for each file defaults to approximately the square root
> >of its size. You can find out the exact block size rsync is using for a
> >file by passing -vvv (--debug=deltasum2 in rsync 3.1.*) and looking for
> >"blength=" in the output, or you can specify a block size to use for all
> >files with the --block-size option. Then, just overwrite the desired
> >areas of the file.
>
> My next question is, how to overwrite the desired areas in a file?

Here's an example. Suppose the block size is 900 bytes (i.e., you either specified --block-size=900 or saw blength=900 in the output). Then you could use the following command to overwrite blocks 40 through 45 of the file (counting from 0) with zeros:

dd bs=900 if=/dev/zero of=/home/test.xyz seek=40 count=6

To overwrite with random data, change if=/dev/zero to if=/dev/urandom.

--
Matt
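The same "seek to block 40, overwrite 6 blocks" idea can be sketched in Python, which makes the arithmetic explicit; the file path, block count, and fill byte here are purely illustrative stand-ins for the dd example:

```python
import os
import tempfile

BLOCK_SIZE = 900     # rsync block size (e.g. from "blength=" in -vvv output)
FIRST_BLOCK = 40     # first block to overwrite, counting from 0
COUNT = 6            # number of blocks to overwrite
TOTAL_BLOCKS = 100   # illustrative file size

path = os.path.join(tempfile.mkdtemp(), "test.xyz")  # stand-in for /home/test.xyz

# Create a file of zeroed blocks (stand-in for the dd-created test file)
with open(path, "wb") as f:
    f.write(b"\x00" * (BLOCK_SIZE * TOTAL_BLOCKS))

# "r+b" opens for in-place update: seek to the target block and overwrite
# without truncating anything after it
with open(path, "r+b") as f:
    f.seek(FIRST_BLOCK * BLOCK_SIZE)
    f.write(b"\xff" * (BLOCK_SIZE * COUNT))

final_size = os.path.getsize(path)
print(final_size)   # unchanged: only blocks 40-45 differ
```

Only the six targeted blocks change; everything before and after them is untouched.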
> Here's an example. Suppose the block size is 900 bytes (i.e., you
> either specified --block-size=900 or saw blength=900 in the output).
> Then you could use the following command to overwrite blocks 40 through
> 45 of the file (counting from 0) with zeros:
>
> dd bs=900 if=/dev/zero of=/home/test.xyz seek=40 count=6
>
> To overwrite with random data, change if=/dev/zero to if=/dev/urandom.
>
> --
> Matt

Hi Matt, and thanks for the fast answer!

I tested the command to overwrite the desired number of blocks, but it didn't work!

I created a 100 MB file with:
dd if=/dev/zero of=/home/test/test.xyz bs=1M count=100

Then I wanted to overwrite 25 blocks:
dd bs=1M if=/dev/urandom of=/home/test.xyz seek=50 count=25

But when I look at the file, its size is now 76.8 MB! So all the blocks after block 75 are deleted! :(
Need help, please!

Thanks,
David
On Tue 27 Jan 2009, David de Lama wrote:
> Then I wanted to overwrite 25 blocks:
> dd bs=1M if=/dev/urandom of=/home/test.xyz seek=50 count=25
>
> But when I look at the file, its size is now 76.8 MB! So all the blocks after block 75 are deleted! :(
> Need help, please!

Try the dd manpage, which mentions:

  conv=CONVS
         convert the file as per the comma separated symbol list
  ...
  Each CONV symbol may be:
  ...
  notrunc
         do not truncate the output file

Hence try adding conv=notrunc

Paul Slootman
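The truncation pitfall is easy to reproduce outside dd: opening a file for plain writing truncates it on open, while opening it for update overwrites in place. A small illustration (file name and sizes are made up):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.bin")

def make_file():
    with open(path, "wb") as f:
        f.write(b"\x00" * 1000)

make_file()
# "wb" truncates on open -- the analogue of dd writing without conv=notrunc
with open(path, "wb") as f:
    f.seek(500)
    f.write(b"\xff" * 100)
size_truncated = os.path.getsize(path)
print(size_truncated)   # everything past the written region is gone

make_file()
# "r+b" updates in place -- the effect conv=notrunc gives dd
with open(path, "r+b") as f:
    f.seek(500)
    f.write(b"\xff" * 100)
size_preserved = os.path.getsize(path)
print(size_preserved)   # full original length kept
```

The first variant leaves a 600-byte file (500 seeked plus 100 written), just like the shrunken 100 MB test file above; the second keeps all 1000 bytes.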
-----Original Message-----
From: "Paul Slootman" <paul+rsync@wurtel.net>
Sent: 27.01.09 11:25:31
To: rsync@lists.samba.org
Subject: Re: Chance of equal checksum and changing blocks

> Try the dd manpage, which mentions:
> ...
>   notrunc
>          do not truncate the output file
>
> Hence try adding conv=notrunc
>
> Paul Slootman

IT WORKED!!! 1000 thanks! :)

David