Displaying 20 results from an estimated 900 matches similar to: "Porting ZFS, trouble with nvpair"
2010 Mar 02
2
dedup source code
Hello ZFS experts:
I would like to study the ZFS de-duplication feature. Can someone please let me know which directories/files I should be looking at?
Thanks in advance.
--
This message posted from opensolaris.org
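For anyone wanting to poke at the feature while reading the code, a minimal sketch using standard zfs/zpool/zdb commands (pool and dataset names are placeholders); as far as I can tell, the dedup table (DDT) logic lives with the rest of the ZFS kernel code under usr/src/uts/common/fs/zfs/ (the ddt*.c files):
# enable dedup on a test dataset, then write some duplicate data
zfs set dedup=on tank/test
# summary of the dedup table (DDT) for the pool
zpool status -D tank
# detailed DDT statistics
zdb -DD tank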
2011 Jul 14
1
mount.ocfs2: Invalid argument while mounting /dev/mapper/xenconfig_part1 on /etc/xen/vm/. Check 'dmesg' for more information on this error.
Hello,
this is my scenario:
1) I've created a Pacemaker cluster with the following OCFS2 packages on openSUSE 11.3 64-bit:
ocfs2console-1.8.0-2.1.x86_64
ocfs2-tools-o2cb-1.8.0-2.1.x86_64
ocfs2-tools-1.8.0-2.1.x86_64
2) I've configured the cluster as usual:
<resources>
<clone id="dlm-clone">
<meta_attributes id="dlm-clone-meta_attributes">
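A hedged first step for isolating this kind of mount failure outside of Pacemaker (the device path is the one from the subject line; /mnt/test is just an example mount point). A manual mount of a cluster-aware volume may still need the cluster stack running, but dmesg usually says why the mount was rejected:
# confirm the volume carries an ocfs2 filesystem and check it read-only
mounted.ocfs2 -d
fsck.ocfs2 -n /dev/mapper/xenconfig_part1
# try a manual mount and read the kernel log the error message points to
mount -t ocfs2 /dev/mapper/xenconfig_part1 /mnt/test
dmesg | tail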
2013 Oct 26
2
[PATCH] 1. changes for vdiskadm on illumos based platform
2. update ZFS in libfsimage from illumos for pygrub
diff -r 7c12aaa128e3 -r c2e11847cac0 tools/libfsimage/Rules.mk
--- a/tools/libfsimage/Rules.mk Thu Oct 24 22:46:20 2013 +0100
+++ b/tools/libfsimage/Rules.mk Sat Oct 26 20:03:06 2013 +0400
@@ -2,11 +2,19 @@ include $(XEN_ROOT)/tools/Rules.mk
CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/
2010 Jun 11
9
Are recursive snapshot destroy and rename atomic too?
In another thread, recursive snapshot creation was found to be atomic, so that it is done quickly and, more importantly, all at once or not at all.
Do you know whether recursive destroying and renaming of snapshots are atomic too?
Regards
Henrik Heino
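For reference, the three recursive operations in question as plain commands (tank/home and the snapshot names are placeholders):
zfs snapshot -r tank/home@backup1                      # recursive create (found atomic in the earlier thread)
zfs destroy -r tank/home@backup1                       # recursively destroy the same-named snapshot on all descendants
zfs rename -r tank/home@backup1 tank/home@backup2      # recursively rename it on all descendants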
2009 Nov 11
0
libzfs zfs_create() fails on sun4u daily bits (daily.1110)
I encountered a strange libzfs behavior while testing a zone fix and
want to make sure that I found a genuine bug. I'm creating zones whose
zonepaths reside in ZFS datasets (i.e., the parent directories of the
zones' zonepaths are ZFS datasets). In this scenario, zoneadm(1M)
attempts to create ZFS datasets for zonepaths. zoneadm(1M) has done
this for a long time (since
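A hedged sketch of the scenario being described, with hypothetical zone and dataset names; the install step is where zoneadm(1M) calls into libzfs to create a dataset for the zonepath:
# the parent directory of the zonepath is itself a ZFS dataset
zfs create -o mountpoint=/zones rpool/zones
zonecfg -z testzone "create; set zonepath=/zones/testzone; commit"
zoneadm -z testzone install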
2010 Aug 20
1
ocfs2 hang writing until reboot the cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064#012
Hello,
I hope this is the correct mailing list.
I have a Pacemaker cluster with a cloned OCFS2 resource, using
ocfs2-tools-1.4.1-25.6.x86_64
ocfs2-tools-o2cb-1.4.1-25.6.x86_64
on Opensuse 11.2
After a network problem on my switch, I receive the following messages on one of the 4 nodes of my cluster:
Aug 18 13:12:28 nodo1 openais[8462]: [TOTEM] The token was lost in the
OPERATIONAL state.
Aug 18 13:12:28
2011 May 03
4
multiple disk failures cause zpool hang
Hi,
There seem to be a few threads about zpool hangs; is there a workaround to resolve the hang without rebooting?
In my case, I have a pool with disks from external LUNs attached via a fibre cable. When the cable is unplugged while there is I/O in the pool, all zpool-related commands hang (zpool status, zpool list, etc.), and plugging the cable back in does not solve the problem.
Eventually, I
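A hedged, preventive sketch rather than a fix for an already hung pool (pool name is a placeholder; failmode is the standard pool property):
# before the failure: return errors instead of blocking when all paths to the devices are lost
zpool set failmode=continue tank
# after the cable/path is restored: clear the error counters and re-check
zpool clear tank
zpool status -x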
2011 Oct 12
33
weird bug with Seagate 3TB USB3 drive
Banging my head against a Seagate 3TB USB3 drive.
Its marketing name is:
Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102
format(1M) shows it identifying itself as:
Seagate-External-SG11-2.73TB
Under both Solaris 10 and Solaris 11x, I receive the evil message:
| I/O request is not aligned with 4096 disk sector size.
| It is handled through Read Modify Write but the performance
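One hedged check for whether the slice layout is what makes every write misaligned: partition start sectors (counted in 512-byte blocks) should be multiples of 8 to line up with the 4096-byte physical sectors (device name is an example):
# the "First Sector" column should be divisible by 8
prtvtoc /dev/rdsk/c5t0d0s2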
2006 Jun 19
0
snv_42 zfs/zpool dump core and kernel/fs/zfs won't load.
I'm pretty sure this is my fault, but I need some help fixing the system.
It was installed at one point with snv_29 and the pre-integration SUNWzfs package. I did a live upgrade to snv_42 but forgot to remove the old SUNWzfs first. When the system booted up, I got complaints about kstat install because I still had an old zpool kernel module lying around.
So I did pkgrm
2008 Dec 26
19
separate home "partition"?
(I use the term loosely because I know that ZFS likes whole volumes better)
When installing Ubuntu, I got in the habit of using a separate partition for my home directory so that my data and GNOME settings would all remain intact when I reinstalled or upgraded.
I'm running OSOL 2008.11 on an Ultra 20, which has only two drives. I've got all my data located in my home directory,
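A hedged sketch of the dataset-per-home approach on OpenSolaris (I believe 2008.11 already lays home out under rpool/export; the user name is hypothetical):
# see the stock layout
zfs list -r rpool/export
# one dataset per user keeps data and GNOME settings separate from the OS
zfs create rpool/export/home/alice
# before a reinstall, the whole subtree can be captured with a recursive send
zfs snapshot -r rpool/export@preinstall
zfs send -R rpool/export@preinstall > /backup/export.zfs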
2012 Jul 03
1
buildworld fails with clang
Hello,
9-STABLE fails to build with clang *unless* "NO_WERROR=" and "WERROR=" are set in /etc/make.conf. It used to work not long ago:
FreeBSD zozo.afpicl.lan 9.0-STABLE FreeBSD 9.0-STABLE #0 r237222M: Mon
Jun 18 10:18:54 CEST 2012
root@zozo.afpicl.lan:/usr/obj/usr/src/sys/CORE amd64
# svnversion
238067M
# make NOCLEAN=yes NO_CLEAN=yes buildworld
[...]
===> cddl/lib
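The workaround implied by the report is to put those two knobs in /etc/make.conf so warnings stop being promoted to errors; a hedged sketch of the file, not a recommendation:
# /etc/make.conf
NO_WERROR=
WERROR=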
2008 May 14
2
vdev cache - comments in the source
Hello zfs-code,
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_cache.c
72 * All i/os smaller than zfs_vdev_cache_max will be turned into
73 * 1<<zfs_vdev_cache_bshift byte reads by the vdev_cache (aka software
74 * track buffer). At most zfs_vdev_cache_size bytes will be kept in each
75 * vdev's vdev_cache.
While it
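For experimenting with the behaviour those comments describe, the tunables can be set from /etc/system on Solaris/OpenSolaris; a hedged example with purely illustrative values (requires a reboot):
* /etc/system: vdev cache tuning (illustrative values)
* i/os smaller than zfs_vdev_cache_max are inflated to 1<<zfs_vdev_cache_bshift byte reads
set zfs:zfs_vdev_cache_max=16384
set zfs:zfs_vdev_cache_bshift=16
* at most zfs_vdev_cache_size bytes are kept per vdev
set zfs:zfs_vdev_cache_size=10485760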
2007 Apr 19
9
ZFS disables nfs/server on a host
I have an Ultra 10 client running Sol10 U3 that has a ZFS pool set up on the extra space of the internal IDE disk. There's just the one filesystem, and it is shared with the sharenfs property. When this system reboots, nfs/server ends up getting disabled, and this is the error from the SMF logs:
[ Apr 16 08:41:22 Executing start method ("/lib/svc/method/nfs-server start") ]
[ Apr 16
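A hedged set of commands for seeing why SMF took the service down and for bringing it back once the share error itself is fixed (pool/filesystem name is a placeholder):
# show why nfs/server is offline and which log file to read
svcs -xv nfs/server
# the filesystem is shared via the ZFS property rather than dfstab
zfs get sharenfs tank/export
# after fixing the share problem, clear any maintenance state and re-enable
svcadm clear nfs/server
svcadm enable nfs/server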
2009 Feb 18
4
tracing aio syscalls
Hi all,
Is there some documentation or an example of how to interpret arg0
.. arg<n> for the aioread, aiowrite, and aiowait syscalls? The system call
name for all three seems to be "kaio".
Michael
=== Michael Mueller ==================
Tel. + 49 8171 63600
Fax. + 49 8171 63615
Web: http://www.michael-mueller-it.de
======================================
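A hedged starting point: all three library calls funnel into the kaio system call, whose first argument is an opcode, so the usual first step is to print the raw args per opcode and match them against <sys/aio.h> (the opcode interpretation is an assumption to verify there):
dtrace -n 'syscall::kaio:entry { printf("kaio opcode = %d", arg0); }'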
2012 Jan 03
10
arc_no_grow is set to 1 and never set back to 0
Hello.
I have a Solaris 11/11 x86 box (which I migrated from SolEx 11/10 a couple of weeks ago).
For no obvious reason (at least to me), after an uptime of 1 to 2 days (observed 3 times now), Solaris sets arc_no_grow to 1 and then never sets it back to 0. The ARC is shrunk to less than 1 GB -- needless to say, performance is terrible. There is not much load on this system.
Memory
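Hedged commands for confirming the state and watching the ARC while it happens (the variable name is the one from the subject; the kstats are the standard arcstats):
# read the kernel variable directly
echo "arc_no_grow/D" | mdb -k
# watch ARC size against its target and ceiling
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max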
2007 Apr 19
3
Using dtrace to snoop messages between two Streams modules
I'm working on a case where a customer has a 3rd-party STREAMS driver/module, called uplink, which sits over Sun's ce driver. This 3rd-party module is used by the telco to perform telco-grade NIC failover.
The customer was given an IDR ce driver to avoid a panic they were hitting. The IDR driver was successful in avoiding the panic, but now the
customer is getting many
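A hedged way to start snooping the messages flowing between the two modules: an fbt probe on putnext() sees every message passed downstream, and the sending queue's module name can be pulled out of the qinit/module_info chain (standard STREAMS structure layout, but worth verifying against the customer's build):
dtrace -n 'fbt::putnext:entry { printf("%s puts mblk %p", stringof(args[0]->q_qinfo->qi_minfo->mi_idname), args[1]); }'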
2011 Sep 12
1
glusterfs, pacemaker and Filesystem RA
Hello List
Due to a mistake, my post from yesterday was cut off, which is why I am sending it again as a new thread. I hope it will work this time.
<---- Originally posted mail starts here ---->
Hello Marcel, hello Samuel,
sorry for my late answer, but I was away for two months, so I could only continue my tests last week.
First of all thank you for your patch of the
2007 Jul 26
4
Does iSCSI target support SCSI-3 PGR reservation ?
Does the OpenSolaris iSCSI target support SCSI-3 PGR reservations?
My goal is to use an iSCSI LUN created by [1] or [2] as a quorum device for a 3-node Sun Cluster.
[1] zfs set shareiscsi=on <storage-pool/zfs volume name>
[2] iscsitadm create target .....
Thanks,
-- leon
This message posted from opensolaris.org
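For completeness, a hedged sketch of option [1] with placeholder pool/volume names; whether the resulting target honours SCSI-3 persistent group reservations is exactly the open question here:
# create a zvol and export it with the shareiscsi property
zfs create -V 1g tank/quorum
zfs set shareiscsi=on tank/quorum
# confirm the target was created
iscsitadm list target -v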
2007 May 29
6
Deterioration with zfs performance and recent zfs bits?
Has anyone else noticed a significant zfs performance deterioration
when running recent opensolaris bits?
My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a
full opensolaris release build in ~ 4 hours 45 minutes (gcc shadow
compilation disabled; using an lzjb-compressed zpool/zfs on a single notebook P-ATA hard drive).
After upgrading to 2007-05-25 opensolaris release bits
2006 Mar 03
5
flag day: ZFS on-disk format change
Summary: If you use ZFS, do not downgrade from build 35 or later to
build 34 or earlier.
This putback (into Solaris Nevada build 35) introduced a backwards-
compatable change to the ZFS on-disk format. Old pools will be
seamlessly accessed by the new code; you do not need to do anything
special.
However, do *not* downgrade from build 35 or later to build 34 or
earlier. If you do so, some of
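On bits new enough to have pool format versioning, a hedged way to see where a pool stands before moving between builds (pool name is a placeholder):
# versions supported by the running bits
zpool upgrade -v
# pools still at an older, more widely readable version
zpool upgrade
# move a pool to the newest format once you are sure you will not go back
zpool upgrade tank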