similar to: Is there any performance problem with hard links in ZFS?

Displaying 20 results from an estimated 8000 matches similar to: "Is there any performance problem with hard links in ZFS?"

2006 Apr 26
1
re-linking hard links
Hello, I have a situation where I have numerous files with numerous hard links to each of them on an ext3 RHEL4.2 system. Some of these files are duplicates of the others. I would like to re-link all of the duplicates to point to a single inode. For instance if file1 has hardlinks link1 and link2, and file2 has hardlinks link3 and link4, I need to change it so that link1, link2 (these
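
A minimal sketch of one way to do that kind of re-linking, not taken from the thread: it assumes GNU md5sum, a placeholder top directory /data, and paths without embedded newlines; running cmp before ln -f would be safer than trusting the checksum alone.

    # Group regular files under /data (placeholder path) by checksum and
    # replace each later duplicate with a hard link to the first one seen.
    find /data -type f -print0 | xargs -0 md5sum | sort | \
    while read -r sum path; do
        if [ "$sum" = "$prev_sum" ]; then
            ln -f "$prev_path" "$path"    # duplicate: re-link to the first copy
        else
            prev_sum=$sum
            prev_path=$path
        fi
    done
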
2009 May 12
1
--copy-links and --hard-links
Hi, I want to use rsync in a may be unusual way: I have a source tree containing lots of symbolic links and I use the option "--copy-links" to get the physical files (the referents of the symlinks) on the target host. As the host uses the synchronized files in a read-only fashion, I also want to get hardlinks for all identical files, to save space. Thus I also use
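
A hedged sketch of the combination being described (paths are placeholders, not from the thread). Note that -H/--hard-links only preserves hard links that already exist on the sending side; turning merely identical files into hard links would still need a separate pass on the target.

    # --copy-links replaces symlinks with the files they point to,
    # -H preserves hard links that already exist in the source tree.
    rsync -a --copy-links --hard-links /srv/source/ target-host:/srv/mirror/
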
2010 Nov 24
8
hard links across snapshots/subvolumes are actually a bad idea.
I've been thinking about this for a while, from a perspective of how to make it work by allocating i-node numbers from a global pool, but yesterday I realized that offering the feature would be a bad idea because it violates the semantics of file systems. I will be happy to expand on that point if anyone disagrees with it. dln -- "It is merely a matter of persistence." --
2007 Jul 10
17
all open files
Hi All, Is there a simple way to list all currently open file descriptors? TIA. Regards, Venkat -- This message posted from opensolaris.org
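
One hedged way to get such a list on Solaris 10 / OpenSolaris, not from the thread: walk /proc and run pfiles(1) on every process. This needs enough privilege to inspect other users' processes; lsof, where installed, gives a similar view.

    # Print the open file descriptors of every running process.
    for pid in /proc/[0-9]*; do
        pfiles "${pid##*/}" 2>/dev/null
    done
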
2016 Sep 23
1
Re: [PATCH v3 1/3] New API: internal_find_block
On Tuesday, 20 September 2016 16:19:30 CEST Matteo Cafasso wrote: > + for (index = 0; index < count; index++) { > + fsattr = tsk_fs_file_attr_get_idx (fsfile, index); > + > + if (fsattr != NULL && fsattr->flags & TSK_FS_ATTR_NONRES) > + tsk_fs_attr_walk (fsattr, flags, attrwalk_callback, (void *) &blkdata); The return code of tsk_fs_attr_walk must
2004 Sep 10
1
not always making hard links?
I'm using 2.6.3pre1 to transfer a rather large Debian archive (126GB, more than 30 million files). It contains about 450 daily snapshots, where unchanged files are hardlinked between the snapshots (so many files have hundreds of links). It's been running for some time now, and I found that while it's far from done, it's already used 165GB on the receiving end. Investigation shows
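
A hedged way to check whether the links are actually being reproduced on the receiving side (paths and file names are placeholders; stat -c and %h are GNU stat): files whose link count is 1 in the destination but greater than 1 in the source were stored as full copies rather than hard-linked.

    # In the destination tree, list regular files that ended up with
    # only one link, i.e. candidates that were stored as full copies.
    find /backup/dest -type f -links 1 -print | head
    # Compare link counts for a specific file on both sides.
    stat -c 'links=%h %n' /archive/src/some/file /backup/dest/some/file
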
2016 Sep 20
1
Re: [PATCH v2 1/3] New API: internal_find_block
On Monday, 19 September 2016 23:26:57 CEST Matteo Cafasso wrote: > The internal_find_block command searches all entries referring to the > given filesystem data block and returns a tsk_dirent structure > for each of them. > > For filesystems such as NTFS which do not delete the block mapping > when removing files, it is possible to get multiple non-allocated > entries for the
2016 Sep 19
5
[PATCH v2 0/3] New API - find_block
v2:
- use boolean field in struct
- move refactoring to previous series

Matteo Cafasso (3):
  New API: internal_find_block
  New API: find_block
  find_block: added API tests

 daemon/tsk.c         | 90 ++++++++++++++++++++++++++++++++++++++++++++
 generator/actions.ml | 25 ++++++++++++
 src/MAX_PROC_NR      |  2 +-
 src/tsk.c            | 17 +++++++++
2017 Feb 15
1
There is problem of rsync with options --hard-links --inplace.
There is the problem which I described here: https://bugs.centos.org/view.php?id=12820. With the --inplace option, rsync does not break a hard link in the destination when that hard link has been broken in the source. Does the problem remain in the latest version of rsync? -- View this message in context: http://samba.2283325.n4.nabble.com/There-is-problem-of-rsync-with-options-hard-links-inplace-tp4714872.html Sent from
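
A hedged sketch of how the reported behaviour could be reproduced, built only from the description above (file names are invented; stat -c is GNU stat):

    mkdir -p src dst
    echo one > src/a
    ln src/a src/b                 # a and b share an inode in src
    rsync -aH src/ dst/            # dst/a and dst/b are hard-linked as well
    rm src/b
    echo two > src/b               # the link is now broken in src
    rsync -aH --inplace src/ dst/  # per the report, dst/a and dst/b stay linked
    stat -c '%i %n' dst/a dst/b    # equal inode numbers would confirm the report
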
2005 Jun 07
1
Hard links revisited
I can't seem to get the hard links to work in Windows via --link-dest. It just refuses to find the files in the directory and make a hard link. Here's the setup:
- I run 2 rsyncs from batch files called sync.bat and sync2.bat.
- I'm connected to the server via a mapped network drive, we'll call it 'z'.
- I then run the client, on the destination, once to pull the data
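
A hedged sketch of the kind of --link-dest invocation being described, with made-up Cygwin-style paths and drive letters. One common pitfall: a relative --link-dest path is interpreted relative to the destination directory, so an absolute path is usually the safer choice.

    # Yesterday's backup is the link reference, today's is the destination.
    rsync -avH --link-dest=/cygdrive/z/backup/2005-06-06/ \
          /cygdrive/c/data/ /cygdrive/z/backup/2005-06-07/
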
2016 Sep 20
5
[PATCH v3 0/3] New API - find_block
v3:
- fixed attribute walk callback: checking against TSK_FS_BLOCK_FLAG_RAW flag would exclude compressed data blocks which are still important. Yet we want to exclude sparse blocks (TSK_FS_BLOCK_FLAG_SPARSE) as they are not stored on the disk.

Matteo Cafasso (3):
  New API: internal_find_block
  New API: find_block
  find_block: added API tests

 daemon/tsk.c | 91
2007 Apr 09
5
CAD application not working with zfs
Hello, we use several CAD applications and with one of them we have problems using ZFS. OS and hardware are SunOS 5.10 Generic_118855-36 on a Fire X4200; the CAD application is Catia V4. Several configuration and data files are stored on the server and shared via NFS to Solaris and AIX clients. The application crashes on the AIX client unless the server shares those files from a ufs
2016 Feb 08
2
Lots of zero-byte hard link files in cur (and new/tmp), cannot see messages in folder
Hello, I have an el-cheapo shared hosting account on Dreamhost, and have had it for a very long time. For the most part everything usually works fairly well, considering I do keep a lot of folders, and mail, on some of my accounts. They are running Dovecot, but I still don't have a response about the version or a doveconf -n output yet. My problem is that one of my most used folders, which was
2007 Mar 12
9
X2200-M2
After the interesting revelations about the X2100 and its hot-swap abilities, what are the abilities of the X2200-M2's disk subsystem, and is ZFS going to tickle any weirdness out of them? -brian -- "The reason I don't use Gnome: every single other window manager I know of is very powerfully extensible, where you can switch actions to different mouse buttons.
2010 Jul 16
2
Problem with hard links in lda - please help
Hello, I'm trying to enable hardlinks for messages sent to multiple users. (I need this because I have mailing lists with 5000 users, used many times a day). I've read that, to do this, I have to write a script that uses the /usr/local/libexec/dovecot/deliver command in this way: deliver -p <FILE> -d <USER1> -d <USER2> Of course, I enabled the "socket listen"
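
A hedged sketch of the kind of wrapper being described, using only the deliver invocation quoted above; the message path and recipient names are placeholders.

    #!/bin/sh
    # Hand one stored copy of the list message to the Dovecot LDA for
    # several recipients in a single call, as in the quoted command.
    MSG=/tmp/list-message.eml
    /usr/local/libexec/dovecot/deliver -p "$MSG" -d user1 -d user2
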
2009 May 12
4
Controlling outbound bandwidth utilization by port
Among other things, I run an http server on my home DSL line (6M/768kbit). The content includes several large image galleries, and when certain crawlers hit our server w/ multiple large image uploads, we end up with large ping time delays - sufficient to disrupt the kids' on-line gaming. Attempts to control this with robots.txt have not been very successful; Solaris IPQoS appears quite
2010 Mar 02
3
BackupPC, per-dir hard link limit, Debian packaging
I realise that the hard link limit is in the queue to fix, and I read the recent thread as well as the older (October, I think) thread. I just wanted to note that BackupPC *does* in fact run into the hard link limit, and it's due to the dpkg configuration scripts. BackupPC hard links files with the same content together by scanning new files and linking them together, whether or not they started
2003 Nov 26
2
Test case for hard link failure
The rsync 2.5.6 TODO file mentions the need for hard link test cases. Here is one in which a linked file is unnecessarily transferred in full.

    # Setup initial directories
    mkdir src dest
    dd if=/dev/zero bs=1024 count=10000 of=src/a 2>/dev/null
    rsync -a src/. dest/.
    ln src/a src/b
    # At this point, a & b exist in src; only a exists in dest.
    rsync -aHv src/. dest/.
2010 Jan 13
1
cannot backup to external disk with rsync, create hard links of add w permissions
Dear rsync users, I want to back up my Documents directory, and instead of copying everything to my external disk I tried this command line to create an incremental backup with rsync on my external disk:

    $ cd /media/My\ Passport
    $ rsync -a --delete --link-dest=../BackUp08.2009_Ubuntu/ /home/thomas2/Documents/ BackUp01.2010_Ubuntu/

but I get the following output: rsync: chgrp
2010 Aug 02
10
Number of hard links limit
Hi, There's been discussion before on this list on the very small number of hard links supported by btrfs.[1][2] In those threads, an often asked question has been if there's a real world use case the limit breaks. Also it has been pointed out that a fix for this would need a disk format change. As discussed in bug #15762 [3], there are certainly real-world use cases this
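
A hedged way to see the limit under discussion on a given filesystem (directory name is a placeholder; on filesystems with a high per-file link limit this loop will create a very large number of links, so it is really only meant for the btrfs case described above):

    # Keep linking until ln fails (e.g. with "Too many links").
    mkdir /mnt/btrfs/linktest && cd /mnt/btrfs/linktest
    touch base
    i=0
    while ln base "link$i" 2>/dev/null; do i=$((i+1)); done
    echo "created $i hard links before failing"
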