similar to: do we support zonepath on UFS formatted ZFS volume

Displaying 20 results from an estimated 1200 matches similar to: "do we support zonepath on UFS formatted ZFS volume"

2007 Oct 08
2
safe zfs-level snapshots with a UFS-on-ZVOL filesystem?
I had some trouble installing a zone on ZFS with S10u4 (bug in the postgres packages) that went away when I used a ZVOL-backed UFS filesystem for the zonepath. I thought I'd push on with the experiment (in the hope Live Upgrade would be able to upgrade such a zone). It's a bit unwieldy, but everything worked reasonably well - performance isn't much worse than straight
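A minimal sketch of the UFS-on-ZVOL zonepath setup being described, assuming hypothetical pool and zone names (pool1, zone1); lockfs is one way to flush UFS before taking the ZFS-level snapshot of the backing volume:

  zfs create -V 10G pool1/zone1-root        # ZVOL to back the zone root
  newfs /dev/zvol/rdsk/pool1/zone1-root     # put a UFS filesystem on the ZVOL
  mount /dev/zvol/dsk/pool1/zone1-root /zones/zone1
  zonecfg -z zone1 "create; set zonepath=/zones/zone1"
  lockfs -f /zones/zone1                    # flush pending UFS writes
  zfs snapshot pool1/zone1-root@backup      # snapshot the ZVOL, not the UFS layer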
2006 Oct 31
3
zfs: zvols minor #'s changing and causing probs w/ volumes
Team, **Please respond to me and my coworker listed in the Cc, since neither one of us is on this alias** QUICK PROBLEM DESCRIPTION: The customer created a dataset which contains all the zvols for a particular zone. The zone is then given access to all the zvols in the dataset using a match statement in the zone configuration (see long problem description for details). After the initial boot of the zone
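For reference, the kind of per-dataset device match being described usually takes this form in zonecfg (pool, dataset and zone names here are hypothetical):

  zonecfg -z zone1
  zonecfg:zone1> add device
  zonecfg:zone1:device> set match=/dev/zvol/dsk/pool1/zone1-vols/*
  zonecfg:zone1:device> end
  zonecfg:zone1> commit

The wildcard matches whatever device nodes exist under that path when the zone boots.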
2006 Oct 01
1
Crossbow and zones
Howdy, I just finished reading through the Crossbow presentation: http://blogs.sun.com/sunay/resource/crossbow.pdf And have one question. If you create a virtual NIC with dladm: $ dladm create-vnic -d bge0 -m 0:1:2:3:4:5 -b 10000 1 Can you then add vnic1 directly to the zone? e.g.: zonecfg -z zone1 zonecfg:zone1> create zonecfg:zone1> set zonepath=/zones/zone1 zonecfg:zone1> add net
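The stanza the poster is building presumably continues along these lines; a plausible completion using later zonecfg syntax (exclusive-IP stack and the vnic1 created above):

  zonecfg:zone1> set ip-type=exclusive
  zonecfg:zone1> add net
  zonecfg:zone1:net> set physical=vnic1
  zonecfg:zone1:net> end
  zonecfg:zone1> commit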
2006 Oct 31
0
6241028 metaimport doesn't detect partial diskset if missing disk doesn't have a mddb on it
Author: jeanm Repository: /hg/zfs-crypto/gate Revision: cfe2171ad33aa1cb081f4141b11c75bbbc049b7c Log message: 6241028 metaimport doesn't detect partial diskset if missing disk doesn't have a mddb on it Files: update: usr/src/cmd/lvm/util/metaimport.c update: usr/src/lib/lvm/libmeta/common/meta_import.c
2005 Nov 17
2
zpool iostat question
Hello ZFSland, Is there any significance in the fact that the bandwidth/read figures for a simple cpio into a ZFS filesystem should be multiples of 21.3K (when non-zero) as follows? What could determine this figure? Do I need to read a manpage? ;-) Thanks... Sean. ----- [root at global:/36g2] # zpool iostat 3 capacity operations bandwidth pool used avail read
2011 Aug 11
6
unable to mount zfs file system... please help
# uname -a Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux # rpm -qa|grep zfs zfs-test-0.5.2-1 zfs-modules-0.5.2-1_2.6.18_194.el5 zfs-0.5.2-1 zfs-modules-devel-0.5.2-1_2.6.18_194.el5 zfs-devel-0.5.2-1 # zfs list NAME USED AVAIL REFER MOUNTPOINT pool1 120K 228G 21K /pool1 pool1/fs1 21K 228G 21K /vik [root at
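A hedged first pass at this kind of mount failure, using the dataset name from the listing above, is to check the mount-related properties and try an explicit mount:

  zfs get mountpoint,mounted pool1/fs1     # confirm the mountpoint and current mount state
  zfs set mountpoint=/vik pool1/fs1        # (re)apply the desired mountpoint if needed
  zfs mount pool1/fs1                      # or: zfs mount -a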
2008 Oct 16
1
attaching 2nd vol unsupported?
Hi, I'm trying to attach another volume aka disk to a Windows HVM, however it doesn't seem to work: + xm block-attach win2008ss phy:/dev/zvol/dsk/pool1/win2008ss.dsk2 \ hdd:disk w 0 results in: elkner.sol ~ > + xm block-list win2008ss --long (0 (vbd (uuid 7cb8fadf-619d-dde6-bda9-dcc18023c7d5) (bootable 1) (devid 768) (driver paravirtualised)
2006 Jun 12
2
?: zfs mv within pool seems slow
I have just upgraded my jumpstart server to S10 u2 b9a. It is an Ultra 10 with two 120GB EIDE drives. The second drive (disk1) is new, and has u2b9a installed on a slice, with most of the space in slice 7 for the ZFS pool. I created pool1 on disk1, and created the filesystem pool1/ro (for legacy reasons). I then moved my data from the original disk0 UFS file system to pool1/ro. Initially I
2010 May 15
7
Unable to Destroy One Particular Snapshot
Howdy All, I've a bit of a strange problem here. I have a filesystem with one snapshot that simply refuses to be destroyed. The snapshots just prior to it and just after it were destroyed without problem. While running the zfs destroy command on this particular snapshot, the server becomes more-or-less hung. It's pingable but will not open a new shell (local or via ssh) however
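Two things worth ruling out when a single snapshot refuses to go away, sketched here with hypothetical names (and assuming a ZFS version that supports user holds), are holds and dependent clones:

  zfs holds pool1/fs@stubborn              # any user holds must be released first (zfs release)
  zfs list -r -o name,origin pool1         # look for clones whose origin is that snapshot
  zfs destroy pool1/fs@stubborn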
2009 Jan 12
1
ZFS size is different?
Hi all, I have 2 questions about ZFS. 1. I have created a snapshot in my pool1/data1, and zfs send/recv it to pool2/data2, but I found the USED in zfs list is different: NAME USED AVAIL REFER MOUNTPOINT pool2/data2 160G 1.44T 159G /pool2/data2 pool1/data 176G 638G 175G /pool1/data1 It keeps about 30,000,000 files. The content of p_pool/p1 and backup/p_backup
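If it helps frame the question, the usual way to see where such a difference comes from is to compare the space-accounting properties on both sides (dataset names from the message; the usedby* properties exist only on newer ZFS versions):

  zfs get used,referenced,usedbysnapshots,compressratio pool1/data1
  zfs get used,referenced,usedbysnapshots,compressratio pool2/data2

Differences in compression, copies or recordsize between source and target datasets can also account for a gap of this size.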
2010 May 05
0
Migrating ZFS/data pool to new pool on the same system
Can anyone confirm my action plan is the proper way to do this? The reason I'm doing this is I want to create 2xraidz2 pools instead of expanding my current 2xraidz1 pool. So I'll create a 1xraidz2 vdev, migrate my current 2xraidz1 pool over, destroy that pool and then add it as a 1xraidz2 vdev to the new pool. I'm running b130, sharing both with CIFS and iSCSI (not
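One common shape for this kind of migration, sketched with hypothetical disk names (and assuming the CIFS/iSCSI shares are re-enabled afterwards), is a recursive replication stream into the new pool:

  zpool create pool2 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0   # new raidz2 vdev
  zfs snapshot -r pool1@migrate                                         # recursive snapshot of the old pool
  zfs send -R pool1@migrate | zfs recv -Fd pool2                        # replicate datasets, snapshots and properties
  zpool destroy pool1                                                   # only after verifying the copy
  zpool add pool2 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0      # re-add the old disks as a second raidz2 vdev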
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote: > Hi Nithya, > > I applied the workaround for this bug and now df shows the right size: > > That is good to hear. > [root at stor1 ~]# df -h > Filesystem Size Used Avail Use% Mounted on > /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0 > /dev/sdc1
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote: > Hi Nithya, > > My initial setup was composed of 2 similar nodes: stor1data and stor2data. > A month ago I expanded both volumes with a new node: stor3data (2 bricks > per volume). > Of course, then to add the new peer with the bricks I did the 'balance > force' operation.
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message. Below the output of both volumes: [root at stor1t ~]# gluster volume rebalance volumedisk1 status Node Rebalanced-files size scanned failures skipped status run time in h:m:s
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, I applied the workaround for this bug and now df shows the right size: [root at stor1 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0 /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1 stor1data:/volumedisk0 101T 3,3T 97T 4% /volumedisk0 stor1data:/volumedisk1
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, My initial setup was composed of 2 similar nodes: stor1data and stor2data. A month ago I expanded both volumes with a new node: stor3data (2 bricks per volume). Of course, then to add the new peer with the bricks I did the 'balance force' operation. This task finished successfully (you can see info below) and the number of files on the 3 nodes was very similar. For volumedisk1 I
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, Below the output of both volumes: [root at stor1t ~]# gluster volume rebalance volumedisk1 status Node Rebalanced-files size scanned failures skipped status run time in h:m:s
2009 Feb 25
7
Solaris 8/9 branded zones on ZFS root?
Hi all, I have a situation where I need to consolidate a few servers running Solaris 9 and 8. If the application doesn't run natively on Solaris 10 or Nevada, I was thinking of using Solaris 9 or 8 branded zones. My intent would be for the global zone to use ZFS boot/root; would I be correct in thinking that this will be OK for the branded zones? That is, they don't care about
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, There is a known issue with gluster 3.12.x builds (see [1]) so you may be running into this. The "shared-brick-count" values seem fine on stor1. Please send us "grep -n "share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes so we can check if they are the cause. Regards, Nithya [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
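For completeness, the same check on the other peers in this thread (node names as given by the poster) would be something like:

  ssh stor2data 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'
  ssh stor3data 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'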
2011 Jul 20
2
how to add file-based disk space to a guest
hi there, I'm following this documentation to add a file-based disk volume to a KVM guest under CentOS 6.0: http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization/chap-Virtualization-Storage_Volumes.html As instructed, I created a "pool" then a "volume", file-based, e.g.: mkdir /mnt/raid/kvm_pool1 virsh # pool-define-as pool1 dir - - - -
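The documented flow the poster is following continues roughly as below; the pool path is taken from the message, while the volume and guest names are hypothetical:

  virsh pool-define-as pool1 dir - - - - /mnt/raid/kvm_pool1
  virsh pool-build pool1
  virsh pool-start pool1
  virsh vol-create-as pool1 guest1-data.img 20G --format raw
  virsh attach-disk guest1 /mnt/raid/kvm_pool1/guest1-data.img vdb --persistent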