This is distinct from the old mass-symlinking warnings. I ran a program
which promised to hardlink all the same-content files on the partition.
The failure occurred reasonably quickly...
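The crash is easy to trigger with nothing more than a loop of ln calls; a
minimal sketch, assuming a scratch btrfs image and short link names (the
image file and mount point here are placeholders, and the exact threshold
is pinned down later in the thread):

  cd /tmp
  dd if=/dev/zero of=scratch.img bs=1M count=400
  mkfs.btrfs scratch.img
  mount -o loop scratch.img /mnt/scratch
  cd /mnt/scratch
  touch target
  i=1 ; while [ $i -le 300 ] ; do
      ln target link$i || break   # on affected kernels this dies around link 273
      i=$((i+1))
  done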
On Sat, Aug 01, 2009 at 05:16:11PM +0400, Raskin Michael wrote:
> This is distinct from the old mass-symlinking warnings. I ran a program
> which promised to hardlink all the same-content files on the partition.
> The failure occurred reasonably quickly...

As Yan said on IRC there's a limit to the number of hardlinks per file
in a given directory. We clearly need to change this from BUG() to
return a nice error.

-chris

> [102404.782966] BUG: unable to handle kernel NULL pointer dereference at (null)
> [102404.782982] IP: [<f9ea4ead>] PageUptodate+0x11/0x39 [btrfs]
> [102404.783028] *pdpt = 000000002270d001 *pde = 0000000000000000
> [102404.783034] Oops: 0000 [#1] SMP
> [102404.783039] last sysfs file: /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
> [102404.783045] Modules linked in: arc4 ecb ath5k mac80211 led_class ath cfg80211 rfkill usblp raid456 raid6_pq async_xor async_memcpy async_tx xor raid1 raid0 af_packet md_mod snd_pcm_oss snd_mixer_oss video output ipv6 ipip tunnel4 radeon drm snd_hda_codec_si3054 snd_hda_codec_realtek snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_timer snd soundcore snd_page_alloc ftdi_sio usbserial loop ati_agp agpgart p4_clockmod speedstep_lib 8139too mii kqemu fuse thermal thermal_sys hwmon ac battery tun usb_storage usb_libusual dm_mod ide_generic ide_gd_mod pata_marvell ata_piix sata_uli sata_sis pata_sis sata_via sata_nv xtkbd atkbd ohci_hcd ssb pcmcia pcmcia_core firmware_class ehci_hcd uhci_hcd usbhid hid usbcore unix btrfs zlib_deflate libcrc32c crc32c sd_mod crc_t10dif jfs nls_base xfs exportfs ext3 jbd mbcache synaptics_i2c sermouse psmouse libps2 pcips2 i8042 serio evdev mousedev ide_cd_mod ide_core pktcdvd ahci sr_mod cdrom pata_atiixp libata scsi_mod radeonfb fb_ddc backlight i2c_algo_bit cfbcopyarea i2c_core cfbimgblt cfbfillrect fbcon fbdev tileblit font bitblit fbcon_rotate fbcon_cw fbcon_ud fbcon_ccw softcursor fb
> [102404.783178] Pid: 1355, comm: nix-store Not tainted (2.6.31-rc4 #1) X51RL
> [102404.783182] EIP: 0060:[<f9ea4ead>] EFLAGS: 00210246 CPU: 0
> [102404.783219] EIP is at PageUptodate+0x11/0x39 [btrfs]
> [102404.783223] EAX: 00000000 EBX: 00000000 ECX: f3c48112 EDX: 00000012
> [102404.783227] ESI: efbded70 EDI: 000000ba EBP: d0aa1cb0 ESP: d0aa1cb0
> [102404.783230] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
> [102404.783235] Process nix-store (pid: 1355, ti=d0aa1000 task=eefd0000 task.ti=d0aa1000)
> [102404.783238] Stack:
> [102404.783240]  d0aa1cd0 f9ea608b f4d85e10 efbded70 000fbeb0 04150f46 efbded70 00000001
> [102404.783249] <0> d0aa1d28 f9e5e367 000000a8 04150f46 0000007e 00000000 f3c65690 f580f800
> [102404.783258] <0> f51f3360 00000010 00000040 d0aa1d10 c103e301 c103e655 00000163 00000010
> [102404.783268] Call Trace:
> [102404.783313]  [<f9ea608b>] ? copy_extent_buffer+0xb0/0x197 [btrfs]
> [102404.783352]  [<f9e5e367>] ? copy_for_split+0xd6/0x383 [btrfs]
> [102404.783361]  [<c103e301>] ? kunmap_atomic+0x75/0xaf
> [102404.783366]  [<c103e655>] ? kmap_atomic+0x22/0x32
> [102404.783409]  [<f9ea5be7>] ? write_extent_buffer+0x1cc/0x205 [btrfs]
> [102404.783448]  [<f9e610e2>] ? split_leaf+0x6e0/0x7a1 [btrfs]
> [102404.783488]  [<f9e63849>] ? btrfs_search_slot+0x8d9/0x9fc [btrfs]
> [102404.783527]  [<f9e63c77>] ? btrfs_insert_empty_items+0x58/0xea [btrfs]
> [102404.783568]  [<f9e7743d>] ? btrfs_insert_inode_ref+0x94/0x285 [btrfs]
> [102404.783610]  [<f9e89f8b>] ? btrfs_add_link+0xdf/0x154 [btrfs]
> [102404.783653]  [<f9e8a032>] ? btrfs_add_nondir+0x32/0x90 [btrfs]
> [102404.783695]  [<f9e8ad49>] ? btrfs_link+0x10f/0x23c [btrfs]
> [102404.783702]  [<c1221665>] ? security_inode_permission+0x46/0x56
> [102404.783719]  [<c118801b>] ? vfs_link+0x163/0x221
> [102404.783725]  [<c118c622>] ? sys_linkat+0x182/0x225
> [102404.783731]  [<c1175135>] ? sys_fchmodat+0x125/0x13d
> [102404.783736]  [<c117d9c6>] ? sys_lstat64+0x4c/0x60
> [102404.783742]  [<c118c6e7>] ? sys_link+0x22/0x32
> [102404.783748]  [<c1005543>] ? sysenter_do_call+0x12/0x28
> [102404.783751] Code: 0c 8d 65 f4 5b 5e 5f 5d c3 55 89 e5 b8 fc aa ee f9 e8 c1 4c 23 c7 5d c3 90 90 55 89 e5 83 05 18 2b f0 f9 01 83 15 1c 2b f0 f9 00 <8b> 00 c1 e8 03 83 e0 01 74 1c 83 05 20 2b f0 f9 01 83 15 24 2b
> [102404.783800] EIP: [<f9ea4ead>] PageUptodate+0x11/0x39 [btrfs] SS:ESP 0068:d0aa1cb0
> [102404.783838] CR2: 0000000000000000
> [102404.783843] ---[ end trace 5618f0a7ac8fd890 ]---
Chris Mason wrote:
> On Sat, Aug 01, 2009 at 05:16:11PM +0400, Raskin Michael wrote:
>> This is distinct from the old mass-symlinking warnings. I ran a program
>> which promised to hardlink all the same-content files on the partition.
>> The failure occurred reasonably quickly...
>
> As Yan said on IRC there's a limit to the number of hardlinks per file
> in a given directory. We clearly need to change this from BUG() to
> return a nice error.

BTW, what limit is that?

--
Tomasz Chmielewski
http://wpkg.org
On Mon, 3 Aug 2009, Tomasz Chmielewski wrote:
> Chris Mason wrote:
>> As Yan said on IRC there's a limit to the number of hardlinks per file
>> in a given directory. We clearly need to change this from BUG() to
>> return a nice error.
> BTW, what limit is that?

272 links. Creating the 273rd link causes the BUG(). The limit seems so
arbitrary that it could probably be raised; a 32-bit count (billions of
links) would be effectively unrestrictive.
Mikhail Raskin <raskin@mccme.ru> writes:
> On Mon, 3 Aug 2009, Tomasz Chmielewski wrote:
>> BTW, what limit is that?
>
> 272 links. Creating the 273rd link causes the BUG(). The limit seems so
> arbitrary that it could probably be raised; a 32-bit count (billions of
> links) would be effectively unrestrictive.

I just ran into the max hard link per directory limit, and remembered
this thread. I get EMLINK when trying to create more than 311 (not 272)
links in a directory, so at least the BUG() is fixed.

What is the reason for the limit, and is there any chance of increasing
it to something more reasonable as Mikhail suggested?

For comparison I tried to create 200k hardlinks to the same file in the
same directory on btrfs, ext4, reiserfs and xfs:

  fs        limit
  --------  ---------------------------------------
  btrfs     311
  reiserfs  64535
  ext4      65000
  xfs       higher than 200000, if there is a limit

Regards,

Pär Andersson
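The table can be reproduced with a counting loop of this shape (a sketch,
not Pär's actual script; the mount points are placeholders, and each
filesystem under test must already be mounted there):

  count_links() {
      cd "$1" || return
      rm -f target link*
      touch target
      n=0
      while ln target "link$n" 2>/dev/null ; do
          n=$((n+1))
      done
      echo "$1: $n extra links before the first failure"
  }
  count_links /mnt/btrfs
  count_links /mnt/ext4
  count_links /mnt/xfs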
On Sun, Oct 11, 2009 at 11:05 PM, Pär Andersson <paran@lysator.liu.se> wrote:
> I just ran into the max hard link per directory limit, and remembered
> this thread. I get EMLINK when trying to create more than 311 (not 272)
> links in a directory, so at least the BUG() is fixed.

The maximum number of hard links depends on the total length of the
link names.

> What is the reason for the limit, and is there any chance of increasing
> it to something more reasonable as Mikhail suggested?

The limit is imposed by the format of inode back references. We can
get rid of the limit, but it requires a disk format change.

Regards
Yan, Zheng
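A back-of-envelope number makes the dependence on name length concrete.
Assuming (as the back-reference format suggests) that all references for
one inode in one directory share a single item that must fit in a leaf
block of around 4 KB, and that each reference costs roughly 10 bytes of
fixed overhead plus the name itself, then with 2-3 character link names
the ceiling is about 4096 / (10 + 3) ≈ 315 references, close to the 311
observed above; with maximum-length 255-character names it falls to about
4096 / (10 + 255) ≈ 15, before leaf and item headers are subtracted.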
Yan, Zheng wrote:
>> What is the reason for the limit, and is there any chance of increasing
>> it to something more reasonable as Mikhail suggested?
>
> The limit is imposed by the format of inode back references. We can
> get rid of the limit, but it requires a disk format change.

Please do get rid of this limit, it's ridiculously small.

Of course, not necessarily right now, but when you introduce some other
changes needing a disk format change, please think of removing the hard
link limit as well.

--
Tomasz Chmielewski
http://wpkg.org
On Mon, Oct 12, 2009 at 10:07:43AM +0200, Tomasz Chmielewski wrote:
> Please do get rid of this limit, it's ridiculously small.
>
> Of course, not necessarily right now, but when you introduce some
> other changes needing a disk format change, please think of removing
> the hard link limit as well.

Please keep in mind this is only a limit on the number of links to a
single file where the links and the file are all in the same directory.

-chris
Pär Andersson wrote:
> I just ran into the max hard link per directory limit, and remembered
> this thread. I get EMLINK when trying to create more than 311 (not 272)
> links in a directory, so at least the BUG() is fixed.
>
> For comparison I tried to create 200k hardlinks to the same file in
> the same directory on btrfs, ext4, reiserfs and xfs:

What real-world application uses and needs this many hard links?

jim
On Monday 12 October 2009, jim owens wrote:
> What real-world application uses and needs this many hard links?

For me the 311 "hard links to the same file under the same directory"
limit is not that high. I don't know of software which needs so many
hard links, but it is easy to find some similar cases.

For example, under my /usr/bin I have 478 _soft links_ to _different_
files:

$ find /usr/bin/ -type l | wc -l
478

When a directory is created, its ".." entry is a hard link to the parent
directory. For example, the /usr/share/doc directory has a link count of
2828 because it has 2826 child directories:

$ ls -ld /usr/share/doc
drwxr-xr-x 2828 root root 12288 2009-08-20 19:03 /usr/share/doc

$ ls -ld /usr/share/* | egrep "^d" | wc -l
2826

These are different cases, but the 311 "hard links to the same file
under the same directory" limit may still be too strict. Not now, but in
the next format change, I think it would be useful to remove this limit.

BR
Goffredo
On Oct 12, 2009, at 12:16 PM, jim owens wrote:
> What real-world application uses and needs this many hard links?

I don't think that's a good counterargument for why this is not a bug.

I can't think of any off the top of my head for Linux, but OS X Time
Machine, for example, can easily create 200+ hardlinks.
Goffredo Baroncelli wrote:
> I don't know of software which needs so many hard links, but it is
> easy to find some similar cases.
>
> For example, under my /usr/bin I have 478 _soft links_ to _different_
> files.

A hard link is not used in place of a soft link; the soft link is a
different and preferred addition to POSIX-style systems. So we should
not assume we need more hard links just because you find apps using
soft links.

> When a directory is created, its ".." entry is a hard link to the
> parent directory. For example, the /usr/share/doc directory has a link
> count of 2828 because it has 2826 child directories.

The maximum number of subdirectories per directory is again a different
feature. btrfs does not use the hard link count for subdirectories.
The association "hard links - 2 == max subdirs" is only a legacy of the
design of some filesystems such as UFS.

> These are different cases, but the 311 "hard links to the same file
> under the same directory" limit may still be too strict. Not now, but
> in the next format change, I think it would be useful to remove this
> limit.

I would agree if the cost were zero, but it increases a field size, so
it would be nice to have a justified need. But it is Chris's call.

jim
John Dong wrote:
> I don't think that's a good counterargument for why this is not a bug.

It is not a "bug". Hard links are not a required feature of all
filesystems, nor is a defined large number required for those with hard
links.

> I can't think of any off the top of my head for Linux, but OS X Time
> Machine, for example, can easily create 200+ hardlinks.

So 311 is 50% more than the app uses... plenty of growth.

jim
On Mon, Oct 12, 2009 at 02:17:11PM -0400, jim owens wrote:
> It is not a "bug". Hard links are not a required feature of all
> filesystems, nor is a defined large number required for those with
> hard links.
>
> So 311 is 50% more than the app uses... plenty of growth.

Just to clarify again, the max link count on btrfs is 2^32. The lower
limit is only in place on links to the same file in the same directory.

Jim is correct about the link count on subdirs being unrelated. The
link count on btrfs directories is always one.

-chris
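The same-directory scope is easy to see by fanning the links out (the
paths here are placeholders):

  mkdir -p /mnt/btrfs/orig
  touch /mnt/btrfs/orig/target
  i=1 ; while [ $i -le 1000 ] ; do
      mkdir -p /mnt/btrfs/spread/$i
      ln /mnt/btrfs/orig/target /mnt/btrfs/spread/$i/link
      i=$((i+1))
  done
  stat -c %h /mnt/btrfs/orig/target   # prints 1001: far past the in-directory ceiling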
jim owens wrote:
> What real-world application uses and needs this many hard links?

The number of links depends on the length of the filename. Is _13_ (yes,
thirteen) hardlinks in a directory a big number? I don't think so.

On systems storing user data, I regularly see user files with
maximum-length names: mostly files and/or directories saved by users
with a web browser - the files take their names from the website title,
and these titles can be really long. Consider that most of the world
uses UTF-8 characters, so the maximum file name length is easily
reached.

Below, we hit the limit with just 13 hardlinks - it's not for me to
decide whether 13 counts as "this many hard links".

cd /tmp
dd if=/dev/zero of=btrfs.img bs=1M count=400
mkfs.btrfs btrfs.img
mount -o loop btrfs.img /mnt/btrfs/
cd /mnt/btrfs
LNFILE=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
touch $LNFILE
i=1 ; while [ $i -ne 40 ] ; do
    ln $LNFILE $i$LNFILE
    echo $i
    i=$((i+1))
done
1
2
3
4
5
6
7
8
9
10
11
12
13
ln: creating hard link `14aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' => `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa': No such file or directory
14
ln: creating hard link `15aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' => `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa': No such file or directory
15
ln: creating hard link `16aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' => `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa': No such file or directory
16

Message from syslogd@dom at Mon Oct 12 22:31:49 2009 ...
dom klogd: [ 9657.948456] Oops: 0000 [#1] SMP

Killed
17

Message from syslogd@dom at Mon Oct 12 22:31:49 2009 ...
dom klogd: [ 9657.948459] last sysfs file: /sys/devices/system/cpu/cpu7/cache/index2/shared_cpu_map

Message from syslogd@dom at Mon Oct 12 22:31:49 2009 ...
dom klogd: [ 9657.948574] Stack:

Message from syslogd@dom at Mon Oct 12 22:31:49 2009 ...
dom klogd: [ 9657.948589] Call Trace:

--
Tomasz Chmielewski
http://wpkg.org
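Tomasz's 13-link result lines up with the name-length arithmetic earlier
in the thread: 13 back references at roughly (255 + 10) bytes each come
to about 3.4 KB, one ~4 KB leaf's worth of references for that
inode/directory pair, so the 14th link has nowhere to go.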
I believe one hard link should be the maximum.

berk
On Monday 12 October 2009, Chris Mason wrote:
> Just to clarify again, the max link count on btrfs is 2^32. The lower
> limit is only in place on links to the same file in the same directory.

Hi Chris and all,

I've made a quick test and managed to create many more links to the same
file in the *same* directory on other filesystems: XFS can do at least
100000, probably more; reiserfs did 64535; ext3 managed 32000; ext4 did
65000.

While I agree it might be a bit stupid to create so many hardlinks to
the same file in the same directory, this issue can be seen as one of
backward compatibility with other widely used and established Linux
filesystems. Stupid or not, the fact is that I've seen some crazy stuff
over the years working with Unix, so people will expect this kind of
thing *not* to break when they switch from their old filesystems to
shiny new btrfs.

Given that this limit is far lower than on other filesystems (we're
talking two orders of magnitude, at best!), I too suggest that the limit
should be increased. It is not critical; it could be done when some
other feature requires a format change, but it should nonetheless be
done to avoid breakage on existing systems.

Best regards, and thanks for your hard work.

Cláudio
jim owens wrote:
> What real-world application uses and needs this many hard links?

I don't know about hundreds of thousands of hard links, but doesn't
busybox use large numbers of hard links in the same directory (e.g. one
for everything in /bin)?

jim owens wrote:
> So 311 is 50% more than the app uses... plenty of growth.

That would be a worryingly small number even if it weren't lower still
for long filenames, and even if someone out there weren't inevitably
using more. Should scalability limits this hard to change really be set
this close to real-world use cases? A format change now is easier than a
format change when everyone is using btrfs for their root filesystem.

-Anthony
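The busybox case is simple to check on any box that has it (applet sets
vary by build, and busybox --install without -s uses hard links); a
sketch using a scratch directory rather than the live /bin:

  busybox --list | wc -l        # typically a few hundred applet names
  # a hardlink-style install drops all of them in one directory:
  mkdir -p /tmp/busybox-bin
  cp /bin/busybox /tmp/busybox-bin/
  for app in $(busybox --list) ; do
      ln /tmp/busybox-bin/busybox "/tmp/busybox-bin/$app"
  done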
* Chris Mason:

>> Of course, not necessarily right now, but when you introduce some
>> other changes needing a disk format change, please think of removing
>> the hard link limit as well.
>
> Please keep in mind this is only a limit on the number of links to a
> single file where the links and the file are all in the same directory.

So to work around this limit, I could do something like this?

  mkdir tmp.$$
  ln a tmp.$$/b
  ln tmp.$$/b .
  rm tmp.$$/b
  rmdir tmp.$$

(Instead of "ln a b".)  What am I missing?

--
Florian Weimer <fweimer@bfk.de>
On Tue, Oct 13, 2009 at 3:07 PM, Florian Weimer <fweimer@bfk.de> wrote:
> So to work around this limit, I could do something like this?
>
> mkdir tmp.$$
> ln a tmp.$$/b
> ln tmp.$$/b .

If "ln a b" doesn't work, this "ln" won't either.

Yan, Zheng
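In other words, the ceiling is on the set of names a single inode ends
up carrying within one directory, however those names were created, so
the final "ln" back into the original directory hits the same full item.
What does fit the current format is fanning repeated links out over
subdirectories; a sketch with an arbitrary bucket scheme:

  # instead of: ln target link$i    (all names in one directory)
  bucket=$(( i % 256 ))             # any stable fan-out works
  mkdir -p links/$bucket
  ln target "links/$bucket/link$i"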
2009/10/12 John Dong <jdong@ubuntu.com>:
> I don't think that's a good counterargument for why this is not a bug.
>
> I can't think of any off the top of my head for Linux, but OS X Time
> Machine, for example, can easily create 200+ hardlinks.

As a lurker, I've actually got a real-world example of something I do
that would probably hit this. It was hinted at before - web URLs are
sometimes ridiculously long.

I run a web archiver on my router box that saves every http URL I hit to
a file named after its URL with a date appended. I then periodically run
a de-duplicator on the saved data, which hard-links together all files
with the same contents (except empty ones). I bet there are lots of
examples that would exceed this limit within those directories.

--
Brian Brunswick    brian@ithil.org
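The de-duplication step Brian describes reduces to a pipeline like this
(a sketch: it assumes a GNU userland, trusts the hash alone, and breaks
on filenames containing whitespace; the archive/ path is a placeholder):

  # hash every archived file, sort so duplicates are adjacent,
  # then hardlink each duplicate to the first file with that hash
  find archive/ -type f -print0 | xargs -0 md5sum | sort | \
  awk '{ if ($1 == prev) { print first, $2 } else { prev = $1; first = $2 } }' | \
  while read keep dup ; do
      ln -f "$keep" "$dup"
  done

Since many archived pages are byte-identical (boilerplate error pages,
repeated banners), a directory of long URL-named files can accumulate
enough same-content entries to hit a low in-directory link ceiling.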
>>> this thread. I get EMLINK when trying to create more than 311 (not 272)
>>> links in a directory
>>
>> what real-world application uses and needs this many hard links?
>
> I don't think that's a good counterargument for why this is not a bug.

I strongly agree. Our ignorance of users operating inside existing
limits of existing filesystems shouldn't justify tightening those
limits.

Sure, this is a weird corner case. But given how early btrfs is in the
deployment stage, it seems worth changing the format to get rid of the
risk of teaching people to question their expectation that btrfs will
just work in environments where previous Linux filesystems worked.

- z
On Tue, Oct 13, 2009 at 10:45:43AM -0700, Zach Brown wrote:
> Sure, this is a weird corner case. But given how early btrfs is in the
> deployment stage, it seems worth changing the format to get rid of the
> risk of teaching people to question their expectation that btrfs will
> just work in environments where previous Linux filesystems worked.

This hasn't been at the top of my list for a while; I remember a bunch
of planning sessions where you weren't worried about it ;) But we can
look at ways to resolve it in the future. My big concern right now is
the enospc support, but there is room to update this without forcing a
full format change.

-chris
> This hasn't been at the top of my list for a while; I remember a bunch
> of planning sessions where you weren't worried about it ;)

Yeah, no doubt. I go back and forth :)

- z
Chris Mason <chris.mason@oracle.com> writes:
> Please keep in mind this is only a limit on the number of links to a
> single file where the links and the file are all in the same directory.

For the record, the nnmaildir mail backend in Gnus (an Emacs package for
reading news and email) creates multiple hardlinks to the same file in
the same directory. I had several thousand hardlinks at one time.

Gnus/nnmaildir uses hardlinks to keep track of attributes of email
messages. For example, to denote that the email stored in file FOO has
been read, nnmaildir creates a link called marks/read/FOO, linking to an
empty file. The rationale for this mechanism is that 1) you don't want
to modify the email message itself; 2) storing marks in separate files
allows concurrent access to the mail spool without locking; and 3) using
a hardlink rather than a new empty file saves an inode.

I am not saying that what Gnus does is particularly smart, but this is
an example of a real-world application that may break under btrfs.

Regards,
Matteo Frigo
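The pattern Matteo describes boils down to a few lines (a sketch: the
shared-file name '.ticket' and the message names are made up here, and
nnmaildir's real layout differs in detail):

  mkdir -p marks/read
  touch marks/read/.ticket      # one shared empty file ('.ticket' is a hypothetical name)
  for msg in 1234.msg 1235.msg 1236.msg ; do
      ln marks/read/.ticket "marks/read/$msg"
  done

Every message marked as read adds one more name for the same empty inode
in the same directory, which is exactly the case the btrfs back-reference
item constrains.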
On Tue, Oct 13, 2009 at 08:29:02PM -0400, Matteo Frigo spake thusly:
> For the record, the nnmaildir mail backend in Gnus (an Emacs package
> for reading news and email) creates multiple hardlinks to the same
> file in the same directory. I had several thousand hardlinks at one
> time.

I just found out that my company uses BackupPC for backups. It uses hard
links extensively:

  Features include:

  * A clever pooling scheme minimizes disk storage and disk I/O.
    Identical files across multiple backups of the same or different PC
    are stored only once (using hard links), resulting in substantial
    savings in disk storage and disk writes.

"Clever" indeed. It creates filesystems with zillions of inodes, which
are a pain to work with. This is the sort of large storage application I
would be looking to use btrfs for, and apparently the current
implementation would croak.

--
Tracy Reed
http://tracyreed.org
Tracy Reed wrote:
> I just found out that my company uses BackupPC for backups. It uses
> hard links extensively:
> [...]
> "Clever" indeed. It creates filesystems with zillions of inodes, which
> are a pain to work with.

What's hard to work with here?

> This is the sort of large storage application I would be looking to
> use btrfs for, and apparently the current implementation would croak.

Nope: the links are created in separate directories, so no worries here.

--
Tomasz Chmielewski
http://wpkg.org
* [Tracy Reed]

> "Clever" indeed. It creates filesystems with zillions of inodes, which
> are a pain to work with. This is the sort of large storage application
> I would be looking to use btrfs for, and apparently the current
> implementation would croak.

As I understand it, the current implementation shouldn't croak unless
you keep a few hundred copies of the same file in one directory being
backed up, since the limit applies only to hard links to the same file
in the same directory. At least the last time I used it, BackupPC would
make a new tree for each backup (with hard links into the pool), so you
shouldn't hit this limit in the normal case.

However, speaking of BackupPC, it occurs to me that in the context of
btrfs, that kind of storage strategy looks fairly outmoded anyway, and
could benefit from the block-level copy-on-write features already
present in the filesystem (with a bit of block-based data de-duplication
thrown in for good measure).

Øystein
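On btrfs specifically, the pooling could already be expressed without
stressing link counts at all. With a sufficiently new coreutils (reflink
support appeared around version 7.5), a de-duplicator can share extents
instead of inodes (file names here are placeholders):

  cp --reflink=always pooled-copy new-backup-entry
  # each entry is an independent inode (own metadata, no link-count
  # ceiling), but the data blocks are shared copy-on-write until modified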
Matteo Frigo <athena@fftw.org> writes:
> For the record, the nnmaildir mail backend in Gnus (an Emacs package
> for reading news and email) creates multiple hardlinks to the same
> file in the same directory. I had several thousand hardlinks at one
> time.

Gnus with nnmaildir is what I use. I wanted to test how much performance
and disk space would be gained with my Maildirs on btrfs, but I hit this
bug instead.

> I am not saying that what Gnus does is particularly smart, but this is
> an example of a real-world application that may break under btrfs.

Not smart at all, and the number of hard links is not the only problem
with nnmaildir. But on other filesystems at least it works.

Regards,

Pär Andersson