similar to: [zfs] portable zfs send streams (preview webrev)

Displaying 20 results from an estimated 3000 matches similar to: "[zfs] portable zfs send streams (preview webrev)"

2012 Oct 12
9
[PATCH] Fits: tool to parse stream
A simple tool to parse a FITS stream from stdout. Signed-off-by: Arne Jansen <sensille@gmx.net> --- The idea of the btrfs send stream format was to generate it in a way that makes it easy to receive on different platforms. Thus the proposed name FITS, for Filesystem Incremental Backup Stream. We should also build the tools to receive the stream on different platforms. As a place to collect
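A minimal sketch of the outer loop such a receiving tool could use, assuming framing in the style of the btrfs send stream (a magic-plus-version header followed by length/command/checksum framed records); the struct layouts, field widths, and magic string here are illustrative assumptions, not the authoritative FITS specification:

/* fits-dump.c: print the command sequence of a send stream read from
 * stdin. Framing is assumed: header { magic, version }, then records
 * of { len, cmd, crc } followed by len bytes of TLV-encoded payload. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct stream_header {
    char     magic[13];              /* "btrfs-stream" plus NUL, assumed */
    uint32_t version;                /* little-endian on the wire */
} __attribute__((packed));

struct cmd_header {
    uint32_t len;                    /* payload bytes after this header */
    uint16_t cmd;                    /* command id (mkfile, write, ...) */
    uint32_t crc;                    /* checksum over the whole command */
} __attribute__((packed));

int main(void)
{
    struct stream_header sh;
    struct cmd_header ch;

    if (fread(&sh, sizeof(sh), 1, stdin) != 1 ||
        memcmp(sh.magic, "btrfs-stream", 12) != 0) {
        fprintf(stderr, "not a send stream\n");
        return 1;
    }

    while (fread(&ch, sizeof(ch), 1, stdin) == 1) {
        char *buf = ch.len ? malloc(ch.len) : NULL;

        if (ch.len && (!buf || fread(buf, 1, ch.len, stdin) != ch.len)) {
            fprintf(stderr, "truncated command\n");
            free(buf);
            return 1;
        }
        /* a real receiver would verify ch.crc here and decode the
         * TLV-encoded attributes carried in buf */
        printf("cmd %u, %u payload bytes\n", (unsigned)ch.cmd,
               (unsigned)ch.len);
        free(buf);
    }
    return 0;
}

Keeping the framing this simple, with fixed headers and explicit lengths, is what makes such a stream practical to receive on platforms other than Linux.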
2013 Feb 13
1
[PATCH] Btrfs: fix crash in log replay with qgroups enabled
When replaying a log tree with qgroups enabled, tree_mod_log_rewind does a sanity-check of the number of items against the maximum possible number. It calculates that number with the nodesize of fs_root. Unfortunately fs_root is not yet set at this stage. So instead use the nodesize from tree_root, which is already initialized. Signed-off-by: Arne Jansen <sensille@gmx.net> ---
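For illustration, the check can be rendered standalone like this (a sketch: the limit is assumed to be nodesize minus the node header, divided by the key-pointer size, and the struct sizes and names are stand-ins for the real btrfs definitions):

#include <stdio.h>
#include <stdint.h>

/* stand-ins for the on-disk structures; the sizes are illustrative */
struct node_header { uint8_t bytes[101]; };
struct key_ptr     { uint8_t bytes[33];  };

/* maximum number of key pointers a node of the given size can hold */
static uint32_t max_items_per_node(uint32_t nodesize)
{
    return (nodesize - sizeof(struct node_header))
           / sizeof(struct key_ptr);
}

int main(void)
{
    /* the bug: taking the nodesize from fs_root before it is set up;
     * the fix: take it from tree_root, which is already initialized */
    uint32_t tree_root_nodesize = 4096;    /* example value */
    uint32_t nritems = 120;                /* items found in a node */

    if (nritems > max_items_per_node(tree_root_nodesize))
        printf("corrupt node: %u items\n", nritems);
    else
        printf("node ok: %u of max %u items\n",
               nritems, max_items_per_node(tree_root_nodesize));
    return 0;
}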
2011 Oct 06
26
[PATCH v0 00/18] btrfs: Subvolume Quota Groups
This is a first draft of a subvolume quota implementation. It is possible to limit subvolumes and any group of subvolumes and also to track the amount of space that will get freed when deleting snapshots. The current version is functionally incomplete, with the main missing feature being the initial scan and rescan of an existing filesystem. I put some effort into writing an introduction into
2011 May 02
5
[PATCH v3 0/3] btrfs: quasi-round-robin for chunk allocation
In a multi device setup, the chunk allocator currently always allocates chunks on the devices in the same order. This leads to a very uneven distribution, especially with RAID1 or RAID10 and an uneven number of devices. This patch always sorts the devices before allocating, and allocates the stripes on the devices with the most available space, as long as there is enough space available. In a low
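A standalone sketch of the sort-then-place policy described here; the device list, sizes, and stripe count are made up for illustration, and the kernel code of course works on btrfs's own device structures:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

struct dev { int id; uint64_t free_bytes; };

/* order devices by available space, largest first */
static int by_free_desc(const void *a, const void *b)
{
    const struct dev *x = a, *y = b;

    if (x->free_bytes == y->free_bytes)
        return 0;
    return x->free_bytes > y->free_bytes ? -1 : 1;
}

int main(void)
{
    struct dev devs[] = {
        { 0, 500ULL << 30 }, { 1, 120ULL << 30 },
        { 2, 900ULL << 30 }, { 3, 650ULL << 30 },
    };
    int ndevs = 4, nstripes = 2;          /* e.g. RAID1: two copies */
    uint64_t stripe_size = 1ULL << 30;    /* 1 GiB per stripe */

    qsort(devs, ndevs, sizeof(devs[0]), by_free_desc);

    /* place stripes on the devices with the most room, skipping any
     * device that cannot fit a whole stripe */
    for (int i = 0, placed = 0; i < ndevs && placed < nstripes; i++) {
        if (devs[i].free_bytes < stripe_size)
            continue;
        printf("stripe %d -> device %d\n", placed++, devs[i].id);
        devs[i].free_bytes -= stripe_size;
    }
    return 0;
}

Because the sort reruns for every chunk, devices that received the previous chunk sink in the ordering as they fill, which is what evens out the distribution over time.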
2010 May 26
14
creating a fast ZIL device for $200
Recently, I've been reading through the ZIL/slog discussion and have the impression that a lot of folks here are (like me) interested in getting a viable solution for a cheap, fast and reliable ZIL device. I think I can provide such a solution for about $200, but it involves a lot of development work. The basic idea: the main problem when using an HDD as a ZIL device is the cache flushes
2010 Jun 11
9
Are recursive snapshot destroy and rename atomic too?
In another thread, recursive snapshot creation was found to be atomic, so that it is done quickly and, more importantly, happens all at once or not at all. Do you know whether recursive destroying and renaming of snapshots are atomic too? Regards Henrik Heino
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this. I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a ZIL log device. I'm using the Areca RAID controller, the driver being arcmsr. Quad-core AMD with 16 GB of RAM, OpenSolaris upgraded to snv_134. The zpool
2011 Apr 12
3
[PATCH v2 0/3] btrfs: quasi-round-robin for chunk allocation
In a multi device setup, the chunk allocator currently always allocates chunks on the devices in the same order. This leads to a very uneven distribution, especially with RAID1 or RAID10 and an uneven number of devices. This patch always sorts the devices before allocating, and allocates the stripes on the devices with the most available space, as long as there is enough space available. In a low
2011 Jun 10
6
[PATCH v2 0/6] btrfs: generic readahead interface
This series introduces a generic readahead interface for btrfs trees. The intention is to use it to speed up scrub in a first run, but balance is another hot candidate. In general, every tree walk could be accompanied by a readahead. Deletion of large files comes to mind, where the fetching of the csums takes most of the time. Also the initial build-ups of free-space-caches and
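As a toy standalone illustration of the pattern (the name reada_add and the simulated prefetch are placeholders, not the series' actual API): during a depth-first walk, prefetch for all children of a node is requested up front, so the I/O for later siblings can overlap the processing of earlier ones.

/* simulate a tree walk that announces upcoming nodes before visiting
 * them; a real implementation would issue asynchronous disk reads */
#include <stdio.h>

#define BRANCH 3

static void reada_add(unsigned long bytenr)   /* placeholder name */
{
    /* stand-in for queueing an asynchronous read of the node */
    printf("  prefetch queued for node @%lu\n", bytenr);
}

static void visit(unsigned long bytenr, int level)
{
    printf("visiting node @%lu (level %d)\n", bytenr, level);
    if (level == 0)
        return;
    /* request all children before descending into the first one */
    for (int i = 0; i < BRANCH; i++)
        reada_add(bytenr * 10 + i);
    for (int i = 0; i < BRANCH; i++)
        visit(bytenr * 10 + i, level - 1);
}

int main(void)
{
    visit(1, 2);    /* walk a small synthetic three-level tree */
    return 0;
}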
2011 Jun 29
14
[PATCH v4 0/6] btrfs: generic readahead interface
This series introduces a generic readahead interface for btrfs trees. The intention is to use it to speed up scrub in a first run, but balance is another hot candidate. In general, every tree walk could be accompanied by a readahead. Deletion of large files comes to mind, where the fetching of the csums takes most of the time. Also the initial build-ups of free-space-caches and
2010 Jun 25
13
OCZ Vertex 2 Pro performance numbers
Now the test for the Vertex 2 Pro. This was fun. For more explanation please see the thread "Crucial RealSSD C300 and cache flush?". This time I made sure the device is attached via 3 GBit SATA. This is also only a short test; I'll retest after some weeks of usage. Cache enabled, 32 buffers, 64k blocks: linear write, random data: 96 MB/s; linear read, random data: 206 MB/s; linear
2011 Mar 08
6
[PATCH v1 0/6] btrfs: scrub
This series adds an initial implementation of scrub. It works quite straightforwardly: userspace issues an ioctl for each device in the fs. For each device, it enumerates the allocated device chunks. For each chunk, the contained extents are enumerated and the data checksums fetched. The extents are read sequentially and the checksums verified. If an error occurs (checksum or EIO), a good copy
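A compact sketch of the verify loop described here; every type and helper below is an illustrative stand-in (the real scrub reads extents from disk, verifies crc32c checksums, and fetches the good copy from another mirror on failure):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

struct extent { uint64_t start, len; uint32_t expected_csum; };

/* stand-in: pretend to read the extent and compute its checksum */
static uint32_t read_and_checksum(const struct extent *e)
{
    return e->start == 4096 ? 0xdeadbeef : e->expected_csum;
}

/* stand-in: fetch an intact copy from another mirror and rewrite */
static bool repair_from_good_copy(const struct extent *e)
{
    printf("  rewrote extent @%llu from good copy\n",
           (unsigned long long)e->start);
    return true;
}

int main(void)
{
    /* one chunk's extents with checksums fetched from the csum tree */
    struct extent extents[] = {
        { 0,    4096, 0x1111 },
        { 4096, 4096, 0x2222 },
        { 8192, 4096, 0x3333 },
    };

    for (int i = 0; i < 3; i++) {
        if (read_and_checksum(&extents[i]) != extents[i].expected_csum) {
            printf("csum error in extent @%llu\n",
                   (unsigned long long)extents[i].start);
            if (!repair_from_good_copy(&extents[i]))
                printf("  unrecoverable error\n");
        }
    }
    return 0;
}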
2010 Jul 20
16
zfs raidz1 and traditional raid 5 performance comparison
Hi, for ZFS raidz1 I know that for random I/O, the IOPS of a raidz1 vdev equal the IOPS of one physical disk. Since raidz1 is like RAID5, does RAID5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS? Regards Victor -- This message posted from opensolaris.org
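A rough worked example of why the answer is no (rule-of-thumb reasoning, not a benchmark): raidz1 stripes every block across all data disks, so a random read of a single block touches every disk, and eight 100-IOPS disks in one raidz1 vdev deliver on the order of 100 random read IOPS. Classic RAID5 uses fixed stripes, so independent small reads can be served by different disks in parallel, closer to 8 x 100 IOPS for reads, but small writes pay a read-modify-write penalty that raidz1 avoids by always writing full stripes. The "one disk's IOPS" rule therefore applies to raidz random reads, not to RAID5 reads.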
2012 Oct 04
3
[PATCH] btrfs ulist use rbtree instead
From: Rock <zimilo@code-trick.com>
---
 fs/btrfs/backref.c |  10 ++--
 fs/btrfs/qgroup.c  |  16 +++---
 fs/btrfs/send.c    |   2 +-
 fs/btrfs/ulist.c   | 154 +++++++++++++++++++++++++++++++++++++---------------
 fs/btrfs/ulist.h   |  45 ++++++++++++---
 5 files changed, 161 insertions(+), 66 deletions(-)
diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index ff6475f..a5bebc8
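The gist of the change, as a standalone sketch: membership tests move from scanning a flat array (O(n)) to a tree keyed by value (O(log n)), while a separate chain keeps insertion-order iteration. The patch uses the kernel rbtree; the unbalanced BST and all names below are simplifications for illustration:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

struct unode {
    uint64_t val, aux;
    struct unode *left, *right;    /* BST by val (rbtree in the patch) */
    struct unode *next;            /* preserves insertion order */
};

struct ulist { struct unode *root, *head, **tailp; };

static void ulist_init(struct ulist *u)
{
    u->root = u->head = NULL;
    u->tailp = &u->head;
}

/* returns 1 if val was newly added, 0 if it was already present
 * (allocation-failure handling elided for brevity) */
static int ulist_add(struct ulist *u, uint64_t val, uint64_t aux)
{
    struct unode **p = &u->root;

    while (*p) {
        if (val == (*p)->val)
            return 0;
        p = val < (*p)->val ? &(*p)->left : &(*p)->right;
    }
    struct unode *n = calloc(1, sizeof(*n));
    n->val = val;
    n->aux = aux;
    *p = n;                /* hook into the lookup tree */
    *u->tailp = n;         /* append to the iteration chain */
    u->tailp = &n->next;
    return 1;
}

int main(void)
{
    struct ulist u;

    ulist_init(&u);
    ulist_add(&u, 42, 0);
    ulist_add(&u, 7, 0);
    ulist_add(&u, 42, 0);  /* duplicate, ignored */
    for (struct unode *n = u.head; n; n = n->next)
        printf("%llu\n", (unsigned long long)n->val);
    return 0;
}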
2010 Jun 19
6
does sharing an SSD as slog and l2arc reduce its life span?
Hi, I don't know if it's already been discussed here, but while thinking about using the OCZ Vertex 2 Pro SSD (which according to its spec page has supercaps built in) as a shared slog and L2ARC device, it struck me that this might not be such a good idea. Because this SSD is MLC based, write cycles are an issue here, though I can't find any number in their spec. Why do I
2010 Jun 13
3
panic after zfs mount
Dear all, we ran into a nasty problem the other day. One of our mirrored zpools hosts several ZFS filesystems. After a reboot (all filesystems mounted and in use at that time), the machine panicked (console output further down). After detaching one of the mirrors, the pool fortunately imported automatically in a faulted state without mounting the filesystems. Offlining the unplugged device and clearing the fault
2010 Mar 02
9
DO NOT REPLY [Bug 7194] New: Getting --inplace and --sparse to work together
https://bugzilla.samba.org/show_bug.cgi?id=7194
Summary: Getting --inplace and --sparse to work together
Product: rsync
Version: 3.0.7
Platform: All
OS/Version: All
Status: NEW
Severity: enhancement
Priority: P3
Component: core
AssignedTo: wayned at samba.org
ReportedBy: jansen at
2010 Jun 16
10
At what level does the “zfs” directory exist?
I've posted a query regarding the visibility of snapshots via CIFS here (http://opensolaris.org/jive/thread.jspa?threadID=130577&tstart=0); however, I'm beginning to suspect that it may be a more fundamental ZFS question, so I'm asking the same question here. At what level does the “zfs” directory exist? If the “.zfs” subdirectory only exists as the direct child of the mount point, then can
2012 Aug 12
1
traverse_dataset()
Hi, I'm currently trying to understand in detail how zfs send and zfs diff work. While reading traverse_visitbp() I noticed that all data is read with dsl_read, except for objsets, where dsl_read_nolock is used. Can anybody please shed a little light on why locking can be omitted here? Thanks, Arne
2010 Apr 10
21
What happens when unmirrored ZIL log device is removed ungracefully
Due to recent experiences, and discussion on this list, my colleague and I performed some tests: using Solaris 10, fully upgraded (zpool version 15 is the latest, which does not have the log device removal introduced in zpool version 19), if you lose an unmirrored log device in any way possible, the OS will crash, and the whole zpool is permanently gone, even after reboots. Using OpenSolaris,