I searched and searched but was not able to find your added text in
this long quoted message. Please re-submit using the English language
in simple ASCII text intended for humans.
Thanks,
Bob
On Wed, 30 Jun 2010, Eric Andersen wrote:
>
> On Jun 28, 2010, at 10:03 AM, zfs-discuss-request at opensolaris.org wrote:
>
>> Send zfs-discuss mailing list submissions to
>> zfs-discuss at opensolaris.org
>>
>> To subscribe or unsubscribe via the World Wide Web, visit
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>> or, via email, send a message with subject or body 'help' to
>> zfs-discuss-request at opensolaris.org
>>
>> You can reach the person managing the list at
>> zfs-discuss-owner at opensolaris.org
>>
>> When replying, please edit your Subject line so it is more specific
>> than "Re: Contents of zfs-discuss digest..."
>>
>>
>> Today's Topics:
>>
>> 1. Re: ZFS bug - should I be worried about this? (Gabriele Bulfon)
>> 2. Re: ZFS bug - should I be worried about this? (Victor Latushkin)
>> 3. Re: OCZ Vertex 2 Pro performance numbers (Frank Cusack)
>> 4. Re: ZFS bug - should I be worried about this? (Garrett D'Amore)
>> 5. Announce: zfsdump (Tristram Scott)
>> 6. Re: Announce: zfsdump (Brian Kolaci)
>> 7. Re: zpool import hangs indefinitely (retry post in parts; too
>> long?) (Andrew Jones)
>> 8. Re: Announce: zfsdump (Tristram Scott)
>> 9. Re: Announce: zfsdump (Brian Kolaci)
>>
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Mon, 28 Jun 2010 05:16:00 PDT
>> From: Gabriele Bulfon <gbulfon at sonicle.com>
>> To: zfs-discuss at opensolaris.org
>> Subject: Re: [zfs-discuss] ZFS bug - should I be worried about this?
>> Message-ID: <593812734.121277727391600.JavaMail.Twebapp at sf-app1>
>> Content-Type: text/plain; charset=UTF-8
>>
>> Yes...they're still running...but being aware that a power failure causing an unexpected poweroff may make the pool unreadable is a pain....
>>
>> Yes. Patches should be available.
>> Or adoption may drop a lot...
>> --
>> This message posted from opensolaris.org
>>
>>
>> ------------------------------
>>
>> Message: 2
>> Date: Mon, 28 Jun 2010 18:14:12 +0400
>> From: Victor Latushkin <Victor.Latushkin at Sun.COM>
>> To: Gabriele Bulfon <gbulfon at sonicle.com>
>> Cc: zfs-discuss at opensolaris.org
>> Subject: Re: [zfs-discuss] ZFS bug - should I be worried about this?
>> Message-ID: <4C28AE34.1030209 at Sun.COM>
>> Content-Type: text/plain; CHARSET=US-ASCII; format=flowed
>>
>> On 28.06.10 16:16, Gabriele Bulfon wrote:
>>> Yes...they're still running...but being aware that a power failure causing an unexpected poweroff may make the pool unreadable is a pain....
>>
>> Pool integrity is not affected by this issue.
>>
>>
>>
>> ------------------------------
>>
>> Message: 3
>> Date: Mon, 28 Jun 2010 07:26:45 -0700
>> From: Frank Cusack <frank+lists/zfs at linetwo.net>
>> To: 'OpenSolaris ZFS discuss' <zfs-discuss at opensolaris.org>
>> Subject: Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers
>> Message-ID: <5F1B59775F3FFC0E1781FC4C at cusack.local>
>> Content-Type: text/plain; charset=us-ascii; format=flowed
>>
>> On 6/26/10 9:47 AM -0400 David Magda wrote:
>>> Crickey. Who's the genius who thinks of these URLs?
>>
>> SEOs
>>
>>
>> ------------------------------
>>
>> Message: 4
>> Date: Mon, 28 Jun 2010 08:17:21 -0700
>> From: "Garrett D''Amore" <garrett at
nexenta.com>
>> To: Gabriele Bulfon <gbulfon at sonicle.com>
>> Cc: zfs-discuss at opensolaris.org
>> Subject: Re: [zfs-discuss] ZFS bug - should I be worried about this?
>> Message-ID: <1277738241.5596.4325.camel at velocity>
>> Content-Type: text/plain; charset="UTF-8"
>>
>> On Mon, 2010-06-28 at 05:16 -0700, Gabriele Bulfon wrote:
>>> Yes...they're still running...but being aware that a power failure causing an unexpected poweroff may make the pool unreadable is a pain....
>>>
>>> Yes. Patches should be available.
>>> Or adoption may drop a lot...
>>
>>
>> I don't have access to the information, but if this problem is the same
>> one I think it is, then the pool does not become unreadable. Rather,
>> its state after such an event represents a *consistent* state from some
>> point of time *earlier* than that confirmed fsync() (or a write on a
>> file opened with O_SYNC or O_DSYNC).
>>
>> For most users, this is not a critical failing. For users using
>> databases or requiring transactional integrity for data stored on ZFS,
>> then yes, this is a very nasty problem indeed.
>>
>> I suspect that this is the problem I reported earlier in my blog
>> (http://gdamore.blogspot.com) about certain kernels having O_SYNC and
>> O_DSYNC problems. I can't confirm this though, because I don't have
>> access to the SunSolve database to read the report.
>>
>> (This is something I'll have to check into fixing... it seems like my
>> employer ought to have access to that information...)
>>
>> - Garrett
>>
>>
>>
>> ------------------------------
>>
>> Message: 5
>> Date: Mon, 28 Jun 2010 08:26:02 PDT
>> From: Tristram Scott <tristram.scott at quantmodels.co.uk>
>> To: zfs-discuss at opensolaris.org
>> Subject: [zfs-discuss] Announce: zfsdump
>> Message-ID: <311835455.361277738793747.JavaMail.Twebapp at sf-app1>
>> Content-Type: text/plain; charset=UTF-8
>>
>> For quite some time I have been using zfs send -R fsname@snapname | dd of=/dev/rmt/1ln to make a tape backup of my zfs file system. A few weeks back the size of the file system grew larger than would fit on a single DAT72 tape, and I once again searched for a simple solution to allow dumping of a zfs file system to multiple tapes. Once again I was disappointed...
>>
>> I expect there are plenty of other ways this could have been handled, but none leapt out at me. I didn't want to pay large sums of cash for a commercial backup product, and I didn't see that Amanda would be an easy thing to fit into my existing scripts. In particular (and I could well be reading this incorrectly), it seems that the commercial products, Amanda, and star all dump the zfs file system file by file (with or without ACLs). I found none which would allow me to dump the file system and its snapshots, unless I used zfs send to a scratch disk, and dumped to tape from there. But, of course, that assumes I have a scratch disk large enough.
>>
>> So, I have implemented zfsdump as a ksh script. The method is as follows:
>> 1. Make a bunch of fifos.
>> 2. Pipe the stream from zfs send to split, with split writing to the fifos (in sequence).
>> 3. Use dd to copy from the fifos to tape(s).
>>
>> When the first tape is complete, zfsdump returns. One then calls it again, specifying that the second tape is to be used, and so on.
>>
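>> To make the method concrete, here is a minimal ksh sketch of the fifo/split/dd plumbing described above. The snapshot name, fifo paths, chunk size and tape device are illustrative only; this is not the actual zfsdump script:
>>
>>   #!/bin/ksh
>>   # Sketch: dump one zfs send stream across multiple tapes via fifos.
>>   snap=tank@Tues
>>   tape=/dev/rmt/1ln
>>   # 1. Make the fifos that split will write to (default suffixes aa, ab).
>>   mkfifo /tmp/zdump.aa /tmp/zdump.ab
>>   # 2. split blocks on each fifo until a reader opens it, so each
>>   #    36GB chunk is consumed in its own pass.
>>   zfs send -R "$snap" | split -b 36864m - /tmp/zdump. &
>>   # 3. Copy each chunk to its own tape, swapping tapes in between.
>>   dd if=/tmp/zdump.aa of="$tape" bs=1024k
>>   read dummy?"Insert tape 2 and press return: "
>>   dd if=/tmp/zdump.ab of="$tape" bs=1024k
>>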
>> From the man page:
>>
>> Example 1. Dump the @Tues snapshot of the tank filesystem
>> to the non-rewinding, non-compressing tape, with a 36GB
>> capacity:
>>
>> zfsdump -z tank@Tues -a "-R" -f /dev/rmt/1ln -s 36864 -t 0
>>
>> For the second tape:
>>
>> zfsdump -z tank@Tues -a "-R" -f /dev/rmt/1ln -s 36864 -t 1
>>
>> If you would like to try it out, download the package from:
>> http://www.quantmodels.co.uk/zfsdump/
>>
>> I have packaged it up, so do the usual pkgadd stuff to install.
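>> For example, something like the following, where the package file name is only a guess:
>>
>>   pkgadd -d ./zfsdump.pkg all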
>>
>> Please, though, *try this out with caution*. Build a few test file systems, and see that it works for you.
>> *It comes without warranty of any kind.*
>>
>>
>> Tristram
>> --
>> This message posted from opensolaris.org
>>
>>
>> ------------------------------
>>
>> Message: 6
>> Date: Mon, 28 Jun 2010 11:51:07 -0400
>> From: Brian Kolaci <Brian.Kolaci at Sun.COM>
>> To: Tristram Scott <tristram.scott at quantmodels.co.uk>
>> Cc: ZFS Discussions <zfs-discuss at opensolaris.org>
>> Subject: Re: [zfs-discuss] Announce: zfsdump
>> Message-ID: <4C28C4EB.2010009 at sun.com>
>> Content-Type: text/plain; CHARSET=US-ASCII; format=flowed
>>
>> I use Bacula, which works very well (much better than Amanda did).
>> You may be able to customize it to do direct zfs send/receive; however, I find that although they are great for copying file systems to other machines, they are inadequate for backups unless you always intend to restore the whole file system. Most people want to restore a file or a directory tree of files, not a whole file system. In the past 25 years of backups and restores, I've never had to restore a whole file system. I get requests for a few files, or somebody's mailbox, or somebody's http document root.
>> You can install Bacula directly from CSW (or blastwave).
>>
>> On 6/28/2010 11:26 AM, Tristram Scott wrote:
>>> For quite some time I have been using zfs send -R fsname@snapname | dd of=/dev/rmt/1ln to make a tape backup of my zfs file system. A few weeks back the size of the file system grew larger than would fit on a single DAT72 tape, and I once again searched for a simple solution to allow dumping of a zfs file system to multiple tapes. Once again I was disappointed...
>>>
>>> I expect there are plenty of other ways this could have been handled, but none leapt out at me. I didn't want to pay large sums of cash for a commercial backup product, and I didn't see that Amanda would be an easy thing to fit into my existing scripts. In particular (and I could well be reading this incorrectly), it seems that the commercial products, Amanda, and star all dump the zfs file system file by file (with or without ACLs). I found none which would allow me to dump the file system and its snapshots, unless I used zfs send to a scratch disk, and dumped to tape from there. But, of course, that assumes I have a scratch disk large enough.
>>>
>>> So, I have implemented zfsdump as a ksh script. The method is as follows:
>>> 1. Make a bunch of fifos.
>>> 2. Pipe the stream from zfs send to split, with split writing to the fifos (in sequence).
>>> 3. Use dd to copy from the fifos to tape(s).
>>>
>>> When the first tape is complete, zfsdump returns. One then calls it again, specifying that the second tape is to be used, and so on.
>>>
>>> From the man page:
>>>
>>> Example 1. Dump the @Tues snapshot of the tank filesystem
>>> to the non-rewinding, non-compressing tape, with a 36GB
>>> capacity:
>>>
>>> zfsdump -z tank@Tues -a "-R" -f /dev/rmt/1ln -s 36864 -t 0
>>>
>>> For the second tape:
>>>
>>> zfsdump -z tank@Tues -a "-R" -f /dev/rmt/1ln -s 36864 -t 1
>>>
>>> If you would like to try it out, download the package from:
>>> http://www.quantmodels.co.uk/zfsdump/
>>>
>>> I have packaged it up, so do the usual pkgadd stuff to install.
>>>
>>> Please, though, *try this out with caution*. Build a few test file systems, and see that it works for you.
>>> *It comes without warranty of any kind.*
>>>
>>>
>>> Tristram
>>
>>
>>
>> ------------------------------
>>
>> Message: 7
>> Date: Mon, 28 Jun 2010 08:59:21 PDT
>> From: Andrew Jones <andrewnjones at gmail.com>
>> To: zfs-discuss at opensolaris.org
>> Subject: Re: [zfs-discuss] zpool import hangs indefinitely (retry post
>> in parts; too long?)
>> Message-ID: <185781013.401277740792036.JavaMail.Twebapp at sf-app1>
>> Content-Type: text/plain; charset=UTF-8
>>
>> Now at 36 hours since the zdb process started:
>>
>>
>> PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
>> 827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209
>>
>> Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to determine where and why?
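>>
>> I assume the usual Solaris inspection tools are the place to start, something along these lines (827 being the zdb pid from the listing above):
>>
>>   pstack 827      # user-level stacks of all 209 zdb threads
>>   truss -p 827    # is it still making system calls at all?
>>   echo "::pgrep zdb | ::walk thread | ::findstack" | mdb -k   # kernel stacks
>>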
>> --
>> This message posted from opensolaris.org
>>
>>
>> ------------------------------
>>
>> Message: 8
>> Date: Mon, 28 Jun 2010 09:26:12 PDT
>> From: Tristram Scott <tristram.scott at quantmodels.co.uk>
>> To: zfs-discuss at opensolaris.org
>> Subject: Re: [zfs-discuss] Announce: zfsdump
>> Message-ID: <1916741256.441277742403278.JavaMail.Twebapp at sf-app1>
>> Content-Type: text/plain; charset=UTF-8
>>
>>> I use Bacula which works very well (much better than
>>> Amanda did).
>>> You may be able to customize it to do direct zfs
>>> send/receive, however I find that although they are
>>> great for copying file systems to other machines,
>>> they are inadequate for backups unless you always
>>> intend to restore the whole file system. Most people
>>> want to restore a file or directory tree of files,
>>> not a whole file system. In the past 25 years of
>>> backups and restores, I've never had to restore a
>>> whole file system. I get requests for a few files,
>>> or somebody's mailbox or somebody's http document
>>> root.
>>> You can directly install it from CSW (or blastwave).
>>
>> Thanks for your comments, Brian. I should look at Bacula in more detail.
>>
>> As for full restore versus ad hoc requests for files I just deleted, my experience is mostly similar to yours, although I have had need for full system restore more than once.
>>
>> For the restore of a few files here and there, I believe this is now well handled with zfs snapshots. I have always found these requests to be down to human actions. The need for full system restore has (almost) always been hardware failure.
>>
>> If the file was there an hour ago, or yesterday, or last week, or last month, then we have it in a snapshot.
>>
>> If the disk died horribly during a power outage (grrr!) then it would be very nice to be able to restore not only the full file system, but also the snapshots. The only way I know of achieving that is by using zfs send etc.
>>
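>> As a sketch of what I mean (pool, snapshot and file names invented): the odd deleted file comes straight out of the snapshot directory, while a full restore that keeps the snapshots needs a replicated send stream:
>>
>>   # a file as it was last Tuesday: copy it back out of the snapshot
>>   cp /tank/home/.zfs/snapshot/Tues/lost.txt /tank/home/
>>   # full restore, snapshots included, from a copy held on a backup pool
>>   zfs send -R backup/home@Tues | zfs receive -F tank/home
>>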
>>>
>>> On 6/28/2010 11:26 AM, Tristram Scott wrote:
>> [snip]
>>
>>>>
>>>> Tristram
>> --
>> This message posted from opensolaris.org
>>
>>
>> ------------------------------
>>
>> Message: 9
>> Date: Mon, 28 Jun 2010 13:00:06 -0400
>> From: Brian Kolaci <Brian.Kolaci at Sun.COM>
>> To: Tristram Scott <tristram.scott at quantmodels.co.uk>
>> Cc: zfs-discuss at opensolaris.org
>> Subject: Re: [zfs-discuss] Announce: zfsdump
>> Message-ID: <EB3A216E-DFD0-40FA-932C-65A8AEE4F296 at Sun.com>
>> Content-Type: text/plain; CHARSET=US-ASCII
>>
>>
>> On Jun 28, 2010, at 12:26 PM, Tristram Scott wrote:
>>
>>>> I use Bacula which works very well (much better than
>>>> Amanda did).
>>>> You may be able to customize it to do direct zfs
>>>> send/receive, however I find that although they are
>>>> great for copying file systems to other machines,
>>>> they are inadequate for backups unless you always
>>>> intend to restore the whole file system. Most people
>>>> want to restore a file or directory tree of files,
>>>> not a whole file system. In the past 25 years of
>>>> backups and restores, I've never had to restore a
>>>> whole file system. I get requests for a few files,
>>>> or somebody's mailbox or somebody's http document
>>>> root.
>>>> You can directly install it from CSW (or blastwave).
>>>
>>> Thanks for your comments, Brian. I should look at Bacula in more detail.
>>>
>>> As for full restore versus ad hoc requests for files I just deleted, my experience is mostly similar to yours, although I have had need for full system restore more than once.
>>>
>>> For the restore of a few files here and there, I believe this is now well handled with zfs snapshots. I have always found these requests to be down to human actions. The need for full system restore has (almost) always been hardware failure.
>>>
>>> If the file was there an hour ago, or yesterday, or last week, or last month, then we have it in a snapshot.
>>>
>>> If the disk died horribly during a power outage (grrr!) then it would be very nice to be able to restore not only the full file system, but also the snapshots. The only way I know of achieving that is by using zfs send etc.
>>>
>>
>> I like snapshots when I'm making a major change to the system, or for cloning. So to me, snapshots are good for transaction-based operations, such as stopping and flushing a database, taking a snapshot, then resuming the database. Then you can back up the snapshot with Bacula and destroy the snapshot when the backup is complete. I have Bacula configured with pre-backup and post-backup scripts to do just that. When you do the restore, it will create something that "looks" like a snapshot from the file system perspective, but isn't really one.
>>
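>> In rough outline, the pair of hook scripts amounts to little more than this (dataset and snapshot names are made up; the real scripts also quiesce the database as described above):
>>
>>   # pre-backup script: freeze a point-in-time view for Bacula to read
>>   zfs snapshot tank/db@bacula
>>   # ... Bacula then backs up /tank/db/.zfs/snapshot/bacula ...
>>   # post-backup script: drop the snapshot once the job completes
>>   zfs destroy tank/db@bacula
>>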
>> But if you're looking for a copy of a file from a specific date, Bacula retains that. In fact, you specify the retention period you want, and you'll have access to any and all individual files on a per-date basis. You can retain the files for months or years if you like; how long to keep the tapes around is set in the Bacula config file. So it really comes down to your use case.
>>
>>
>>
>>
>> ------------------------------
>>
>> _______________________________________________
>> zfs-discuss mailing list
>> zfs-discuss at opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>>
>> End of zfs-discuss Digest, Vol 56, Issue 126
>> ********************************************
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/