Displaying 20 results from an estimated 3000 matches similar to: "ZFS, block device and Xen?"
2007 Jan 29
3
dumpadm and using dumpfile on zfs?
Hi All,
I'd like to set up dumping to a file. This file is on a mirrored pool
using zfs. It seems that the dump setup doesn't work with zfs. This
worked for both a standard UFS slice and an SVM mirror using zfs.
Is there something that I'm doing wrong, or is this not yet supported on
ZFS?
Note this is Solaris 10 Update 3, but I don't think that should
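For reference, on releases newer than the one in the post the supported route is to dump to a dedicated zvol rather than to a plain file on ZFS; a minimal sketch, with the pool name and size only illustrative:
# create a dedicated dump volume, then point the dump subsystem at it
zfs create -V 2g rpool/dump
dumpadm -d /dev/zvol/dsk/rpool/dump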
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple of days on SATA channel 8. The disk finally gave up last night at 17:40. I've got to say I really appreciate the Areca controller taking such good care of me.
For some reason, I wasn't able to log into the server last night or this morning, probably because my home dir was on the zpool with the failed disk
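A hedged sketch of the usual recovery path once a pool member dies outright (pool and device names here are hypothetical, not taken from the post):
# find the degraded pool and the faulted disk
zpool status -x
# after swapping the hardware, resilver onto the replacement and watch progress
zpool replace tank c8t1d0
zpool status tank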
2009 Oct 17
3
zvol used apparently greater than volsize for sparse volume
What does it mean for the reported value of a zvol volsize to be
less than the product of used and compressratio?
For example,
# zfs get -p all home1/home1mm01
NAME             PROPERTY  VALUE        SOURCE
home1/home1mm01  type      volume       -
home1/home1mm01  creation  1254440045   -
home1/home1mm01  used      14902492672
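To look at the relationship being asked about, the three properties can be pulled raw and compared by hand; a small sketch using the dataset from the post. One possible (not asserted) explanation is that used also counts metadata and snapshot blocks, which volsize does not:
# print machine-readable values of the properties involved
zfs get -Hp -o property,value used,compressratio,volsize home1/home1mm01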
2009 Feb 02
8
ZFS core contributor nominations
The time has come to review the current Contributor and Core contributor
grants for ZFS. Since all of the ZFS core contributor grants are set
to expire on 02-24-2009, we need to renew the members that are still
contributing at core contributor levels. We should also add some new
members to both Contributor and Core contributor levels.
First the current list of Core contributors:
Bill
2009 Sep 10
3
zfs send of a cloned zvol
Hi,
I have a question: let's say I have a zvol named vol1 which is a clone of a snapshot of another zvol (its origin property is tank/myvol@mysnap).
If I send this zvol to a different zpool through a zfs send, does it send the origin too? That is, does an automatic promotion happen, or do I end up with a broken zvol?
Best regards,
Maurilio.
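For what it's worth, the two usual ways to move a clone are a self-contained full stream (the result is an independent volume) or an incremental from the origin snapshot (which keeps the clone relationship); a hedged sketch, with the snapshot and destination pool names made up:
# option 1: full stream of the clone; no origin needed on the target
zfs snapshot tank/vol1@move
zfs send tank/vol1@move | zfs receive otherpool/vol1
# option 2: send the origin first, then the clone as an incremental from it
zfs send tank/myvol@mysnap | zfs receive otherpool/myvol
zfs send -i tank/myvol@mysnap tank/vol1@move | zfs receive otherpool/vol1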
2009 Oct 23
7
cryptic vdev name from fmdump
This morning we got a fault management message from one of our production servers stating that a fault in one of our pools had been detected and fixed. Looking into the error using fmdump gives:
fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
TIME                  UUID                                  SUNW-MSG-ID
Oct 22 09:29:05.3448  90ea244e-1ea9-4bd6-d2be-e4e7a021f006  FMD-8000-4M Repaired
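To map the cryptic vdev identifier back to a device, the full fault detail and the cached pool configuration can be cross-checked; a hedged sketch (the UUID is the one above, the pool name is hypothetical):
# dump the complete fault event, including the affected vdev GUID
fmdump -V -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
# print the pool configuration with per-vdev GUIDs and match them up
zdb -C mypool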
2007 Sep 11
4
ext3 on zvols journal performance pathologies?
I've been seeing read and write performance pathologies with Linux
ext3 over iSCSI to zvols, especially with small writes. Does running
a journalled filesystem on a zvol turn the block storage into swiss
cheese? I am considering serving ext3 journals (and possibly swap
too) off a raw, hardware-mirrored device. Before I do (and I'll
write up any results) I'd like to know
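A hedged sketch of the external-journal layout the poster is considering, keeping the ext3 journal on a separate mirrored device (device names are hypothetical):
# create a dedicated external journal on the mirrored device
mke2fs -O journal_dev /dev/md0
# build the data filesystem on the zvol-backed iSCSI device, pointing at that journal
mke2fs -j -J device=/dev/md0 /dev/sdb1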
2010 Jan 19
8
Panic running a scrub
This is probably unreproducible, but I just got a panic whilst
scrubbing a simple mirrored pool on SXCE snv124. Evidently
one of the disks went offline for some reason and shortly
thereafter the panic happened. I have the dump and the
/var/adm/messages containing the trace.
Is there any point in submitting a bug report?
The panic starts with:
Jan 19 13:27:13 host6
2006 Sep 06
2
creating zvols in a non-global zone (or 'Doctor, it hurts when I do this')
A colleague just asked if zfs delegation worked with zvols too.
Thought I'd give it a go and got myself in a mess
(tank/linkfixer is the delegated dataset):
root@non-global / # zfs create -V 500M tank/linkfixer/foo
cannot create device links for 'tank/linkfixer/foo': permission denied
cannot create 'tank/linkfixer/foo': permission denied
Ok, so
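For context, a hedged sketch of the zonecfg side: delegating the dataset is one thing, but a zvol's device nodes can instead be passed through from the global zone (the zone name is hypothetical; a zone reboot is needed after zonecfg changes):
# delegate the dataset to the zone, as the poster appears to have done
zonecfg -z myzone 'add dataset; set name=tank/linkfixer; end'
# alternatively, create the zvol in the global zone and expose its device to the zone
zfs create -V 500m tank/linkfixer/foo
zonecfg -z myzone 'add device; set match=/dev/zvol/rdsk/tank/linkfixer/foo; end'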
2009 Jun 08
4
[caiman-discuss] Can not delete swap on AI sparc
Hi Richard,
Richard Robinson wrote:
> I should add that I also used truss and saw the same ENOMEM error. I am on a 4 GB system with swap -l reporting
>
> swapfile                  dev    swaplo  blocks   free
> /dev/zvol/dsk/rpool/swap  181,1  8       4194296  4194296
>
> and I was trying to follow the directions for increasing swap here:
>
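For context, the sequence usually given for growing swap on a ZFS root is remove, resize, re-add; a hedged sketch (the new size is illustrative):
# remove the swap device, grow the backing zvol, then add it back
swap -d /dev/zvol/dsk/rpool/swap
zfs set volsize=8g rpool/swap
swap -a /dev/zvol/dsk/rpool/swap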
2009 Jun 29
7
ZFS - SWAP and lucreate..
Good morning everybody
I was migrating my UFS root filesystem to a ZFS one, but was a little upset to find out that it became bigger (which was to be expected because of the swap and dump size).
Now I am wondering whether it is possible to set the swap and dump size when using the lucreate command (I want to try it again, but with less space). Unfortunately I did not find any advice in the manpages.
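Whether or not lucreate itself exposes a size knob here, the swap and dump zvols can be resized after the migration; a hedged sketch (sizes are illustrative):
# shrink the dump volume created by the migration and re-register it
zfs set volsize=1g rpool/dump
dumpadm -d /dev/zvol/dsk/rpool/dump
# swap can be shrunk the same way once it has been removed with swap -d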
2006 Jan 04
8
Using same ZFS under different kernel versions
I built two ZFS filesystems using b29 (from brandz).
I then re-installed Solaris Express b28, preserving the ZFS filesystems.
When I tried to "zpool import" my zfs filesystems I got a kernel panic:
> debugging crash dump vmcore.0 (32-bit) from blackbird
> operating system: 5.11 snv_28 (i86pc)
> panic message:
> ZFS: bad checksum (read on /dev/dsk/c1d0p0 off 24d5e000: zio
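On later bits than the builds in the post, a version mismatch can at least be spotted before importing, since both the kernel's supported pool versions and a pool's on-disk version are queryable; a hedged sketch (the pool name is hypothetical):
# list the pool versions this kernel understands
zpool upgrade -v
# show the on-disk version of an already-imported pool
zpool get version tank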
2010 Jan 02
27
Pool import with failed ZIL device now possible ?
Hello list,
someone (actually Neil Perrin (CC)) mentioned in this thread:
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html
that it should be possible to import a pool with failed log devices
(with or without data loss?).
> Has the following error no consequences?
>
> Bug ID 6538021
> Synopsis Need a way to force pool startup when
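For reference, later releases added an explicit option for this case; a hedged sketch (the pool name is hypothetical, and -m only exists in bits that include the missing-log-device fix discussed in the thread):
# import the pool despite a missing separate log device,
# accepting the loss of any uncommitted ZIL records
zpool import -m tank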
2009 Mar 03
8
zfs list extentions related to pNFS
Hi,
I am soliciting input from the ZFS engineers and/or ZFS users on an
extension to "zfs list". Thanks in advance for your feedback.
Quick Background:
The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding
a new DMU object set type which is used on the pNFS data server to
store pNFS stripe DMU objects. A pNFS dataset gets created with the
"zfs
2007 Apr 16
10
zfs send/receive question
Hello folks, I have a question and a small problem... I tried to replicate my
zfs filesystem with all the snaps, so I ran a few commands:
time zfs send mypool/d@2006_month_10 | zfs receive mypool2/d@2006_month_10
real 6h35m12.34s
user 0m0.00s
sys 29m32.28s
zfs send -i mypool/d@2006_month_10 mypool/d@2006_month_12 | zfs receive mypool/d@2006_month_12
real 4h49m27.54s
user
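A hedged side note on the replication itself: after the first full stream, all intermediate snapshots can go across in one incremental stream (dataset names follow the post):
# one incremental stream covering every snapshot between the two named ones
zfs send -I mypool/d@2006_month_10 mypool/d@2006_month_12 | zfs receive mypool2/d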
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello,
Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7 and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically.
All nodes were always online and there
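A hedged sketch of the commands typically used to inspect that state (the volume name follows the post):
# list entries still pending heal, and any detected split-brain, per brick
gluster volume heal myvol-private info
gluster volume heal myvol-private info split-brain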
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks Ravi for your answer.
Stupid question but how do I delete the trusted.afr xattrs on this brick?
And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?

------- Original Message -------
On April 9, 2018 1:24 PM, Ravishankar N <ravishankar at redhat.com> wrote:
>
>
> On 04/09/2018 04:36 PM, mabi wrote:
>
> >
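For reference, the afr xattrs live on the brick's backend path and are removed with setfattr; a hedged sketch (the file path is shortened and the client index is only an example, so check getfattr output first):
# inspect the replication xattrs on the brick copy of the file
getfattr -d -m trusted.afr -e hex /data/myvol-private/brick/path/to/problematicfile
# remove the stale xattr reported there
setfattr -x trusted.afr.myvol-private-client-2 /data/myvol-private/brick/path/to/problematicfile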
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As suggested in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the FUSE mount directly. The output is below:
NODE1:
STAT:
File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile'
Size: 0 Blocks: 38
2008 Mar 20
5
Snapshots silently eating user quota
All,
I assume this issue is pretty old given the time ZFS has been around. I have
tried searching the list but could not quite understand how
ZFS actually takes snapshot space into account.
I have a user walter for whom I try the following ZFS operations:
bash-3.00# zfs get quota store/catB/home/walter
NAME PROPERTY VALUE SOURCE
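A hedged sketch of how the snapshot share of that usage can be broken out (the dataset name follows the post; the space breakdown needs a newer zfs than the 2008-era one being discussed):
# break used down into snapshot, dataset and reservation components
zfs list -o space store/catB/home/walter
zfs get used,usedbysnapshots,usedbydataset store/catB/home/walter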
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here would be also the corresponding log entries on a gluster node brick log file:
[2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available]
[2018-04-09