similar to: ZFS snapshot splitting & joining

Displaying 20 results from an estimated 20000 matches similar to: "ZFS snapshot splitting & joining"

2010 May 28
6
zfs send/recv reliability
After looking through the archives I haven't been able to assess the reliability of a backup procedure which employs zfs send and recv. Currently I'm attempting to create a script that will allow me to write a zfs stream to a tape via tar like below. # zfs send -R pool@something | tar -c > /dev/tape I'm primarily concerned with the possibility
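A minimal sketch of the tape approach, assuming a hypothetical pool named tank and tape device /dev/rmt/0. A send stream is already a single byte stream, so it can be written to tape directly (or through dd) rather than wrapped in tar:

  # zfs snapshot -r tank@backup1
  # zfs send -R tank@backup1 | dd of=/dev/rmt/0 bs=1048576
  (and to restore)
  # dd if=/dev/rmt/0 bs=1048576 | zfs receive -Fd tank

Note that zfs receive verifies the stream's checksums and aborts on any corruption, so a damaged tape means losing the whole stream rather than individual files, which is exactly the reliability concern raised above.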
2008 Oct 31
14
questions on zfs backups
On Thu, Oct 30, 2008 at 11:05 PM, Richard Elling <Richard.Elling@sun.com> wrote: > Philip Brown wrote: >> I've recently started down the road of production use for zfs, and am hitting my head on some paradigm shifts. I'd like to clarify whether my understanding is correct, and/or whether there are better ways of doing things. >> I have one question for
2008 Jul 29
8
questions about ZFS Send/Receive
Hi guys, we are proposing to a customer a couple of X4500s (24 TB) used as NAS (i.e. NFS servers). Both servers will contain the same files and should be accessed by different clients at the same time (i.e. both should be active). So we need to guarantee that both X4500s contain the same files. We could simply copy the contents onto both X4500s, which is an option because the "new
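One way to keep two servers in sync is periodic incremental replication with zfs send/receive over ssh; a hedged sketch, where the pool, dataset, snapshot names and the host thumper2 are all made up:

  # zfs snapshot -r tank/nas@2008-07-29
  # zfs send -R -i tank/nas@2008-07-28 tank/nas@2008-07-29 | ssh thumper2 zfs receive -Fd tank

The receiving copy lags by one replication interval, so strictly simultaneous active/active access to identical content would still require something else (e.g. a clustered or shared filesystem).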
2009 Feb 17
32
Backing up ZFS snapshots
I have an OpenSolaris snv_105 server at home that holds my photos, docs, music, etc. in a zfs pool. I back up my laptops with rsync to the OpenSolaris server. All of my important data is in one place, on the OpenSolaris server. I want to back up this data: I want to protect against losing it, and I would also like to recover previous versions of files when I make mistakes. * I do not have a
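A common pattern for this (a sketch only; the pool name tank, the backup pool name backup, and the disk c2t0d0 are assumptions) is a second pool on an external disk fed by incremental zfs send:

  # zpool create backup c2t0d0
  # zfs snapshot -r tank@2009-02-17
  # zfs send -R tank@2009-02-17 | zfs receive -Fd backup
  (later runs transfer only the changes)
  # zfs send -R -i tank@2009-02-17 tank@2009-02-24 | zfs receive -Fd backup

Because the snapshots are preserved on the backup pool, earlier versions of a file remain browsable under its .zfs/snapshot directories.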
2007 Sep 28
5
ZFS Boot won't work with a straight or mirror zfsroot
Using build 70, I followed the zfsboot instructions at http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ to the letter. I tried first with a mirrored zfsroot; when I try to boot to zfsboot the screen is flooded with "init(1M) exited on fatal signal 9". Then I tried with a simple zfs pool (not mirrored) and it just reboots right away. If I try to setup grub
2011 Nov 22
3
SUMMARY: mounting datasets from a read-only pool with aid of tmpfs
Hello all, I'd like to report a tricky situation and a workaround I've found useful - hope this helps someone in similar situations. To cut a long story short, I could not properly mount some datasets from a read-only pool which had a non-"legacy" mountpoint attribute value set, but the mountpoint was not available (directory absent or not empty). In this case
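A rough sketch of the kind of tmpfs-assisted workaround the subject line describes (the dataset name and paths are hypothetical): overlay a tmpfs where the mountpoint should live, so an empty directory can be created even though the underlying filesystem is read-only:

  # mount -F tmpfs swap /mnt
  # mkdir -p /mnt/data
  # zfs mount pool/data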
2009 Mar 29
9
About snapshots or versioned backups
This may be a bit poorly thought through, but in this case I don't really know enough to really think it through. My background is Linux... there I used a tool called rsnapshot, which used rsync and some hardlink magic to create versioned backups that take very little space. By versioned I don't mean as in version control, but just copies of files as they change. It worked by
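ZFS snapshots give much the same effect natively; a minimal sketch, with the dataset and snapshot names assumed:

  # zfs snapshot tank/home@2009-03-29
  # zfs list -t snapshot -r tank/home
  # ls /tank/home/.zfs/snapshot/2009-03-29/

Each snapshot is a read-only view of the filesystem at that moment and, much like rsnapshot's hardlink trees, consumes space only for blocks that have since changed.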
2007 Oct 25
2
zfs receive - list contents of incremental stream?
Apologies up front for failing to find related posts... Am I overlooking a way to get 'zfs send -i fs@0 fs@1 | zfs receive -n -v ...' to show the contents of the stream? I'm looking for the equivalent of ufsdump 1f - fs ... | ufsrestore tv - . I'm hoping that this might be a faster way than using 'find fs -newer ...' to learn
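For what it's worth, zfs receive -n -v prints only the dataset and snapshot names carried in the stream, not the files inside it. One file-level alternative, assuming both snapshots are still present on the source (paths hypothetical), is a dry-run rsync between their .zfs/snapshot directories:

  # rsync -avn /fs/.zfs/snapshot/0/ /fs/.zfs/snapshot/1/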
2006 Jul 13
7
system unresponsive after issuing a zpool attach
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM partitions to ZFS. I used Live Upgrade to migrate from U1 to U2 and that went without a hitch on my SunBlade 2000. And the initial conversion of one side of the UFS mirrors to a ZFS pool and subsequent data migration went fine. However, when I attempted to attach the second side mirrors as a mirror of the ZFS pool, all
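For reference, the attach step being described looks roughly like this (device names are made up):

  # zpool attach tank c1t0d0s0 c1t1d0s0
  # zpool status tank

zpool attach triggers a full resilver of the newly attached side, which generates sustained I/O; that alone can make a busy system sluggish, though it should not normally render it unresponsive.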
2008 Feb 21
37
Preferred backup s/w
Hi all, What is the current preferred method for backing up ZFS data pools, preferably using free ($0.00) software, and assuming that access to individual files (a la ufsdump/ufsrestore) is required? TIA, -- Rich Teer, SCSA, SCNA, SCSECA, OGB member CEO, My Online Home Inventory URLs: http://www.rite-group.com/rich http://www.linkedin.com/in/richteer
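One free combination that covers both needs, sketched under assumed pool and dataset names: zfs send for whole-pool streams, plus a plain tar of a snapshot directory for per-file restores:

  # zfs snapshot -r tank@weekly
  # zfs send -R tank@weekly > /backup/tank-weekly.zfs
  # (cd /tank/home/.zfs/snapshot/weekly && tar -cf /backup/home-weekly.tar .)

Individual files come back with ordinary tar -x; the send stream restores whole datasets with zfs receive.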
2005 Nov 17
2
zpool iostat question
Hello ZFSland, Is there any significance in the fact that the bandwidth/read figures for a simple cpio into a ZFS filesystem should be multiples of 21.3K (when non-zero), as follows? What could determine this figure? Do I need to read a manpage? ;-) Thanks... Sean. ----- [root@global:/36g2] # zpool iostat 3 capacity operations bandwidth pool used avail read
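One possible reading, offered only as an assumption: zpool iostat prints per-second averages over each sampling interval, so with the 3-second interval used above,

  21.3K/s  x  3 s  ~=  64K per interval

i.e. the figure may simply reflect one 64 KB block (or a multiple of it) being read per sample.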
2008 Mar 18
4
Solaris 10 x86 + ZFS / NFS server "cp" problem with AIX
Friends, I have recently built a file server on an X2200 with Solaris x86, running ZFS (version 4), NFS version 2 and Samba. The AIX 5.2 clients give an error while running the command "cp -R <zfs_nfs_mount_source> <zfs_nfs_mount_destination>", as below: cp: 0653-440 directory/1: name too long. cp: 0653-438 cannot read directory directory/1. and the cp core dumps in
2009 Feb 25
7
Solaris 8/9 branded zones on ZFS root?
Hi all, I have a situation where I need to consolidate a few servers running Solaris 9 and 8. If the application doesn't run natively on Solaris 10 or Nevada, I was thinking of using Solaris 9 or 8 branded zones. My intent would be for the global zone to use ZFS boot/root; would I be correct in thinking that this will be OK for the branded zones? That is, they don't care about
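For reference, creating a Solaris 9 branded zone looks roughly like this (a sketch only; the zone name, zonepath and flash archive path are invented, and the Solaris 9 Containers packages must already be installed). The zonepath can sit on a ZFS dataset:

  # zfs create -o mountpoint=/zones rpool/zones
  # zonecfg -z s9zone
  zonecfg:s9zone> create -t SUNWsolaris9
  zonecfg:s9zone> set zonepath=/zones/s9zone
  zonecfg:s9zone> commit
  zonecfg:s9zone> exit
  # zoneadm -z s9zone install -u -a /net/images/s9-system.flar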
2007 Jun 21
9
Undo/reverse zpool create
Hi, If I add an entire disk to a new pool by doing "zpool create", is this reversible? I.e. if there was data on that disk (e.g. it was the sole disk in a zpool in another system) can I get this back or is zpool create destructive? Joubert
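A short sketch of what can still be checked, with a made-up device name; note that zpool create rewrites the vdev labels, so a pool that previously lived on the disk is not recoverable this way unless it was destroyed cleanly rather than overwritten:

  # zdb -l /dev/dsk/c1t0d0s0
  # zpool import -D

zdb -l prints whatever ZFS labels remain on the device, and zpool import -D lists destroyed pools that could still be imported.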
2009 Dec 16
27
zfs hanging during reads
Hi, I hope there's someone here who can possibly provide some assistance. I've had this read problem now for the past 2 months and just can't get to the bottom of it. I have a home snv_111b server with a zfs raid pool (4 x Samsung 750GB SATA drives). The motherboard is an ASUS M2N68-CM (4 SATA ports) with an Athlon LE1620 single core CPU and 4GB of RAM. I am using it
2007 Oct 18
2
GRUB + zpool version mismatches
Apparently with zfs boot, if the zpool is a version grub doesn't recognize, it merely ignores any zfs entries in menu.lst and apparently instead boots the first entry it thinks it can boot. I ran into this myself due to some boneheaded mistakes while doing a very manual zfs / install at the summit. Shouldn't it at least spit out a warning? If so, I have no issues filing a
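A cautious way to avoid the mismatch, assuming a root pool named rpool: compare the pool's on-disk version with what the installed bits (and their GRUB) support before upgrading it:

  # zpool get version rpool
  # zpool upgrade -v

If the pool version is raised beyond what the installed GRUB understands, the boot blocks need to be reinstalled from the newer build (installgrub on x86) before or along with the pool upgrade.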
2005 Nov 19
11
ZFS related panic!
> My current zfs setup looks like this:
>
>   homepool                3.63G  34.1G     8K    /homepool
>   homepool/db             61.6M  34.1G   8.50K   /var/db
>   homepool/db/pgsql       61.5M  34.1G   61.5M   /var/db/pgsql
>   homepool/home           3.57G  34.1G   10.0K   /users
>   homepool/home/carrie       8K  34.1G      8K   /users/carrie
2010 Apr 23
5
Data movement across filesystems within a pool
I would have thought that moving a file from one FS to another within the same pool would be almost instantaneous. Why does it have to go to the platters for such a move? # time cp /tmp/blockfile /pcshare/1gb-tempfile real 0m5.758s # time mv /pcshare/1gb-tempfile . real 0m4.501s Both FSs have compression=off. /tmp is RAM. -devsk
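This matches how ZFS (and POSIX) treat the two cases: within one filesystem, mv is a rename(2) and is effectively instant; across filesystems, even within the same pool, each dataset has its own object/inode space, so mv falls back to copy-then-unlink and the data really is rewritten. A tiny illustration with made-up paths:

  # time mv /tank/fs1/bigfile /tank/fs1/bigfile.moved    (same dataset: rename, near-instant)
  # time mv /tank/fs1/bigfile.moved /tank/fs2/           (different dataset: copy + unlink)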
2007 Jul 05
17
ZFS Compression algorithms - Project Proposal
Below follows a proposal for a new OpenSolaris project. Of course, this is open to change, since I just wrote down some ideas I had months ago while researching the topic as a graduate student in Computer Science, and since I'm not an opensolaris/ZFS expert at all. I would really appreciate any suggestions or comments. PROJECT PROPOSAL: ZFS Compression Algorithms. The main purpose of
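For context, the existing compression hooks are already exposed as a per-dataset property; a small illustration with an assumed dataset name (lzjb is the lightweight default when compression=on, with gzip-1 through gzip-9 added in later builds):

  # zfs set compression=gzip-9 tank/data
  # zfs get compression,compressratio tank/data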
2008 Feb 04
32
Lustre clients getting evicted
on our cluster that has been running Lustre for about 1 month. I have 1 MDT/MGS and 1 OSS with 2 OSTs. Our cluster uses all GigE and has about 608 nodes, 1854 cores. We have a lot of jobs that die and/or go into high I/O wait; strace shows processes stuck in fstat(). The big problem (I think), and I would like some feedback on it, is that of these 608 nodes, 209 of them have in dmesg