Displaying 20 results from an estimated 30000 matches similar to: "corrupt zfs stream? checksum mismatch"
2008 Aug 26
5
Problem w/ b95 + ZFS (version 11) - seeing a fair number of errors on multiple machines
Hi,
After upgrading to b95 of OSOL/Indiana and doing a ZFS upgrade to the newer
on-disk version, all arrays I have using ZFS mirroring are displaying errors.
This started happening immediately after the ZFS upgrades. Here is an example:
ormandj@neutron.corenode.com:~$ zpool status
pool: rpool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was
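A reasonable first response to errors like these, sketched here assuming the pool really is rpool as shown, is to scrub and then re-read the per-device counters:
zpool scrub rpool       # re-read and verify every allocated block
zpool status -v rpool   # per-device read/write/checksum counters, plus any damaged files
zpool clear rpool       # reset the counters once the cause is understood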
2009 Mar 03
8
zfs list extensions related to pNFS
Hi,
I am soliciting input from the ZFS engineers and/or ZFS users on an
extension to "zfs list". Thanks in advance for your feedback.
Quick Background:
The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding
a new DMU object set type which is used on the pNFS data server to
store pNFS stripe DMU objects. A pNFS dataset gets created with the
"zfs
2009 Aug 23
23
incremental backup with zfs to file
FULL backup to a file
zfs snapshot -r rpool@0908
zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908
INCREMENTAL backup to a file
zfs snapshot -r rpool@090822
zfs send -Rv -i rpool@0908 rpool@090822 > /net/remote/rpool/snaps/rpool.090822
As I understand it, the latter gives a file with the changes between 0908 and
090822. Is this correct?
How do I restore those files? I know
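Restoring is the mirror image of the send; a minimal sketch, assuming the same file names as above and that the streams were written with -R:
zfs receive -Fdu rpool < /net/remote/rpool/snaps/rpool.0908     # full stream first
zfs receive -Fdu rpool < /net/remote/rpool/snaps/rpool.090822   # then the incremental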
2008 Aug 08
1
[install-discuss] lucreate into New ZFS pool
Hello,
Since I've got my disk partitioning sorted out now, I want to move my BE
from the old disk to the new disk.
I created a new zpool, named RPOOL for distinction with the existing
"rpool".
I then did lucreate -p RPOOL -n new95
This completed without error; the log is at the bottom of this mail.
I have not yet dared to run luactivate. I also have not yet dared set the
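The usual next steps once the copy is trusted, a sketch reusing the BE name above:
luactivate new95   # mark the new BE as the one to boot
init 6             # reboot via init so the luactivate boot hooks run
lustatus           # after reboot, confirm new95 is the active BE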
2009 Dec 11
7
Doing ZFS rollback with preserving later created clones/snapshot?
Hi.
Is it possible, on Solaris 10 5/09, to roll back to a ZFS snapshot
WITHOUT destroying later created clones or snapshots?
Example:
--($ ~)-- sudo zfs snapshot rpool/ROOT@01
--($ ~)-- sudo zfs snapshot rpool/ROOT@02
--($ ~)-- sudo zfs clone rpool/ROOT@02 rpool/ROOT-02
--($ ~)-- LC_ALL=C sudo zfs rollback rpool/ROOT@01
cannot rollback to 'rpool/ROOT@01': more
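One way to reach the old state without destroying @02 or its clone, a sketch reusing the names above, is to clone the older snapshot instead of rolling back (zfs rollback -r/-R would destroy the later snapshots and clones):
sudo zfs clone rpool/ROOT@01 rpool/ROOT-01   # writable copy of the @01 state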
2008 Jul 09
8
Using zfs boot with MPxIO on T2000
Here is what I have configured:
T2000 with OBP 4.28.6 2008/05/23 12:07, with 2 x 72 GB disks as the root disks
OpenSolaris Nevada Build 91
Solaris Express Community Edition snv_91 SPARC
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 03 June 2008
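For reference, a sketch of the usual MPxIO enablement step that precedes building a mirrored ZFS root on such a box:
stmsboot -e   # enable MPxIO multipathing; asks for a reboot
stmsboot -L   # after reboot, map the old device names to the new MPxIO names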
2006 Jul 17
11
ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?
Hi All,
I've just built an 8 disk zfs storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the switch to Solaris on the promises of ZFS alone (yes I'm that excited about it!), so naturally I'm looking
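A simple sequential baseline for sanity-checking such a box, assuming a pool named tank and that compression is off (a compressed pool makes /dev/zero results meaningless):
dd if=/dev/zero of=/tank/bigfile bs=1M count=8192   # sequential write
dd if=/tank/bigfile of=/dev/null bs=1M              # sequential read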
2009 Oct 15
8
sub-optimal ZFS performance
Hello,
ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome.
I am running OSOL on my laptop, currently b124, and I found that the
performance of ZFS is not optimal in all situations. If I check how
much space the package cache for pkg(1) uses, it takes a bit longer
on this host than on a comparable machine to which I transferred
all the data.
user@host:/var/pkg$ time
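The comparison being made is roughly the following, with the path hypothetical beyond what the prompt shows:
time du -sh /var/pkg   # wall-clock time to walk the package cache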
2011 Nov 22
3
SUMMARY: mounting datasets from a read-only pool with aid of tmpfs
Hello all,
I'd like to report a tricky situation and a workaround
I've found useful - hope this helps someone in similar
situations.
To cut a long story short, I could not properly mount
some datasets from a readonly pool: they had a non-"legacy"
mountpoint attribute value set, but the mountpoint was not
available (directory absent or not empty). In this case
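The workaround, sketched with hypothetical names since the original ones are not shown, is to lay an empty writable tmpfs over the unusable mountpoint so the dataset can mount:
mount -F tmpfs swap /pool   # writable scratch layer over the readonly parent
mkdir /pool/data            # recreate the missing mountpoint directory
zfs mount pool/data         # the dataset now mounts despite the readonly pool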
2009 Apr 19
21
[on-discuss] Reliability at power failure?
Casper.Dik@Sun.COM wrote:
>
> I would suggest that you follow my recipe: do not check the boot-archive
> during a reboot. And then report back. (I'm assuming that will take
> several weeks)
>
We are back at square one; or, at the subject line.
I did a zpool status -v, everything was hunky dory.
Next, a power failure, 2 hours later, and this is what zpool status
2010 Apr 05
3
no hot spare activation?
While testing a zpool with a different storage adapter using my "blkdev"
device, I did a test which made a disk unavailable -- all attempts to
read from it report EIO.
I expected my configuration (a 3 disk test, with 2 disks in a
RAIDZ and a hot spare) to activate the hot spare automatically.
But I'm finding that ZFS does not behave this way
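For reference, a sketch of that topology with hypothetical device names; spares are normally attached by the FMA retire agent after a disk is faulted, so a manual replace is the fallback when no fault is diagnosed:
zpool create tank raidz c1t0d0 c1t1d0 spare c1t2d0   # 2-disk RAIDZ plus hot spare
zpool replace tank c1t0d0 c1t2d0                     # activate the spare by hand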
2009 Aug 14
16
What's eating my disk space? Missing snapshots?
Please can someone take a look at the attached file which shows the output on my machine of
zfs list -r -t filesystem,snapshot -o space rpool/export/home/matt
The USEDDS figure of ~2GB is what I would expect, and is the same figure reported by the Disk Usage Analyzer. Where is the remaining 13.8GB USEDSNAP figure coming from? If I total up the list of zfs-auto snapshots it adds up to about 4.8GB,
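Space shared by several snapshots is charged to none of them individually, which is the usual reason the per-snapshot figures do not add up to USEDSNAP; to see the per-snapshot charges sorted, one can run:
zfs list -r -t snapshot -o name,used -s used rpool/export/home/matt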
2010 Apr 26
2
How to delegate zfs snapshot destroy to users?
Hi,
I'm trying to let zfs users create and destroy snapshots in their zfs
filesystems.
So rpool/vm has the permissions:
osol137 19:07 ~: zfs allow rpool/vm
---- Permissions on rpool/vm -----------------------------------------
Permission sets:
@virtual clone,create,destroy,mount,promote,readonly,receive,rename,rollback,send,share,snapshot,userprop
Create time permissions:
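Destroying a snapshot also requires the mount permission on the dataset, so a delegation along these lines (user name hypothetical) is the usual fix:
zfs allow -u alice mount,destroy rpool/vm   # destroy needs mount as well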
2009 Nov 03
3
virsh troubling zfs!?
Hi and hello,
I have a problem that is confusing me. I hope someone can help me with it.
I followed a "best practice" - I think - using dedicated zfs filesystems for my virtual machines.
Commands (for completion):
zfs create rpool/vms
zfs create rpool/vms/vm1
zfs create -V 10G rpool/vms/vm1/vm1-dsk
This command creates the file system /rpool/vms/vm1/vm1-dsk and the
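Strictly speaking, -V creates a volume rather than a file system; it appears as device nodes rather than a mounted directory, e.g.:
ls -l /dev/zvol/dsk/rpool/vms/vm1/vm1-dsk    # block device for the zvol
ls -l /dev/zvol/rdsk/rpool/vms/vm1/vm1-dsk   # raw device, what virt tools usually want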
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk; recently it started
failing to boot - it hangs after the copyright message whenever I
use any of my GRUB menu options.
Booting with an oi_148a LiveUSB I had around since
installation, I ran some zdb traversals over the rpool
and zpool import attempts. The imports fail by running
the kernel out of RAM (as recently discussed in the
list with
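When imports exhaust RAM, a read-only import that mounts nothing is often the gentlest next attempt, sketched here on the assumption that the live environment's zpool supports the readonly property:
zpool import -o readonly=on -N -R /a rpool   # read-only, no mounts, alternate root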
2010 Mar 13
3
When to Scrub..... ZFS That Is
When would it be necessary to scrub a ZFS filesystem?
We have many "rpool" and "datapool" pools, plus a NAS 7130; would you
recommend scheduling monthly scrubs at off-peak hours, or is scrubbing
really necessary?
Thanks
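A common pattern is a monthly root crontab entry at a quiet hour, pool name as above:
0 2 1 * * /usr/sbin/zpool scrub datapool   # 02:00 on the 1st of each month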
2008 Jul 12
2
sharenfs=off, but still being shared?
I noticed an oddity on my 2008.05 box today.
I created a new zfs file system that I was planning to NFS-share out to an old FreeBSD box; after I put sharenfs=on for it, I noticed there were a bunch of others shared too:
-bash-3.2# dfshares -F nfs
RESOURCE SERVER ACCESS TRANSPORT
reaver:/store/movies reaver - -
reaver:/export
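A sharenfs value set on the parent and inherited by the children is the usual explanation; the source column makes that visible:
zfs get -r -o name,value,source sharenfs store   # "inherited from store" is the tell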
2009 Dec 27
7
How to destroy your system in a funny way with ZFS
Hi all,
I installed another OpenSolaris (snv_129) in VirtualBox 3.1.0 on Windows because snv_130 doesn't boot anymore after installation of the VirtualBox guest additions. Older builds before snv_129 were running fine too. I like some features of this OS, but now I end up with something funny.
I installed default snv_129, installed guest additions -> reboot, set
2010 Jul 28
4
zfs allow does not work for rpool
I am trying to give a general user permission to create zfs filesystems in the rpool.
zpool set delegation=on rpool
zfs allow <user> create rpool
both run without any issues.
zfs allow rpool reports the user does have create permissions.
zfs create rpool/test
cannot create 'rpool/test': permission denied.
Can you not allow to the rpool?
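Creating a dataset also mounts it, so the delegation needs mount as well; a sketch with a hypothetical user:
zfs allow -u bob create,mount rpool   # create fails without the mount permission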
2011 Feb 18
2
time-sliderd doesn't remove snapshots
In the last few days my performance has gone to hell. I'm running:
# uname -a
SunOS nissan 5.11 snv_150 i86pc i386 i86pc
(I'll upgrade as soon as the desktop hang bug is fixed.)
The performance problems seem to be due to excessive I/O on the main
disk/pool.
The only thing I've changed recently is that I've created and destroyed
a snapshot, and I used
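A quick way to check whether automatic snapshots are piling up and driving the I/O, a sketch assuming the default time-slider naming:
zfs list -H -t snapshot -o name | grep zfs-auto-snap | wc -l   # count of auto snapshots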