Displaying 20 results from an estimated 200 matches similar to: "problem mounting one of the zfs file system during boot"
2007 Apr 16
10
zfs send/receive question
Hello folks, I have a question and a small problem... I tried to replicate my
ZFS file system with all of its snapshots, so I ran a few commands:
time zfs send mypool/d@2006_month_10 | zfs receive mypool2/d@2006_month_10
real 6h35m12.34s
user 0m0.00s
sys 29m32.28s
zfs send -i mypool/d@2006_month_10 mypool/d@2006_month_12 | zfs receive mypool/d@2006_month_12
real 4h49m27.54s
user
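The timing output is cut off here. For reference, a minimal sketch of the full-plus-incremental replication pattern being timed above, using the same pool and snapshot names as the post (the incremental is received into mypool2/d on the assumption that it is meant to land on the same copy as the full send):
# initial full replication: the first snapshot has to be sent in its entirety
zfs send mypool/d@2006_month_10 | zfs receive mypool2/d@2006_month_10
# incremental follow-up: only blocks changed between the two snapshots are sent
zfs send -i mypool/d@2006_month_10 mypool/d@2006_month_12 | zfs receive mypool2/d@2006_month_12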
2007 Apr 14
3
zfs snaps and removing some files
Hello folks, I have a strange and unusual request...
I have two 300 GB drives mirrored:
[11:33:22] root@chrysek: /d/d2 > zpool status
pool: mypool
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
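The rest of the post is cut off, but the usual catch with removing files from a snapshotted dataset is that rm does not free space while an older snapshot still references the blocks. A minimal sketch of checking and reclaiming that space (the snapshot name is illustrative, not taken from the thread):
# see how much space each snapshot is pinning
zfs list -t snapshot -o name,used,referenced
# destroying the snapshot that still references the deleted files releases the space
zfs destroy mypool/d@oldsnap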
2010 May 31
3
zfs permanent errors in a clone
$ zfs list -t filesystem
NAME USED AVAIL REFER MOUNTPOINT
datapool 840M 25.5G 21K /datapool
datapool/virtualbox 839M 25.5G 839M /virtualbox
mypool 8.83G 6.92G 82K /mypool
mypool/ROOT 5.48G 6.92G 21K legacy
mypool/ROOT/May25-2010-Image-Update
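The listing is truncated. For permanent errors the usual starting point is zpool status with -v, which names the affected files; a sketch:
# -v lists the individual files affected by permanent (uncorrectable) errors
zpool status -v mypool
# re-verify every block, then reset the per-device error counters
zpool scrub mypool
zpool clear mypool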
2007 Oct 30
2
[osol-help] Squid Cache on a ZFS file system
On 29/10/2007, Tek Bahadur Limbu <teklimbu@wlink.com.np> wrote:
> I created a ZFS file system like the following with /mypool/cache being
> the partition for the Squid cache:
>
> 18:51:27 root@solaris:~$ zfs list
> NAME USED AVAIL REFER MOUNTPOINT
> mypool 478M 31.0G 10.0M /mypool
> mypool/cache 230M 9.78G 230M
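The quoted listing is cut off. A sketch of how such a cache dataset might be created and capped (the 10g quota is an illustrative value, not taken from the thread):
# dedicated dataset for the Squid cache directory, with its own mount point and size cap
zfs create mypool/cache
zfs set mountpoint=/mypool/cache mypool/cache
zfs set quota=10g mypool/cache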
2006 Sep 15
1
[Blade 150] ZFS: extreme low performance
Hi forum,
I'm currently playing around a little with ZFS on my workstation.
I created a standard mirrored pool over 2 disk-slices.
# zpool status
Pool: mypool
Status: ONLINE
scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        mypool        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t0d0s4  ONLINE
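The status output is truncated after the first slice. A sketch of how a mirrored pool over two slices is created (the second slice name is hypothetical; only c0t0d0s4 appears above):
# two-way mirror built from slices rather than whole disks
zpool create mypool mirror c0t0d0s4 c0t2d0s4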
2008 Jun 07
4
Mixing RAID levels in a pool
Hi,
I had a plan to set up a zfs pool with different raid levels but I ran
into an issue based on some testing I've done in a VM. I have 3x 750
GB hard drives and 2x 320 GB hard drives available, and I want to set
up a RAIDZ for the 750 GB and mirror for the 320 GB and add it all to
the same pool.
I tested detaching a drive and it seems to seriously mess up the
entire pool and I
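The post is cut off, but the layout described is straightforward to build; a sketch with illustrative device names (zpool add requires -f when the new vdev's redundancy level differs from the existing one):
# raidz vdev from the three 750 GB drives
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
# add a mirror of the two 320 GB drives as a second top-level vdev
zpool add -f tank mirror c2t0d0 c2t1d0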
2010 May 01
7
Virtual to physical migration
I had created a VirtualBox VM to test out OpenSolaris. I updated to the latest dev build and set my things up. Tested pools and various configs/commands. Learnt format/partition etc.
And then I wanted to move this stuff to a Solaris partition on the physical disk. VirtualBox provides physical disk access. I put my Solaris partition in there and created slices for root, swap, etc. in that partition inside the
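The message is cut off here. One commonly used approach for this kind of virtual-to-physical move, offered only as a sketch (pool names and host are illustrative, not taken from the thread), is to snapshot the pool inside the VM and stream it to the pool built on the physical slices:
# inside the VM: recursive snapshot of everything to be moved
zfs snapshot -r rpool@migrate
# replication stream to the pool on the physical disk
zfs send -R rpool@migrate | ssh physical-host zfs receive -Fd newpool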
2008 Jan 07
0
CR 6647661 <User 1-5Q-12446>, Now responsible engineer P2 kernel/zfs "set once" / "create time only" properties can't be set for pool level dataset
*Synopsis*: "set once" / "create time only" properties can't be set for pool level dataset
Due to a change requested by <User 1-5Q-12446>,
<User 1-5Q-12446> is now the responsible engineer for:
CR 6647661 changed on Jan 7 2008 by <User 1-5Q-12446>
=== Field ============ === New Value ============= === Old Value =============
Responsible Engineer
2009 Apr 15
0
CR 6647661 Updated, P2 kernel/zfs "set once" / "create time only" properties can't be set for pool level dataset
*Synopsis*: "set once" / "create time only" properties can't be set for pool level dataset
CR 6647661 changed on Apr 15 2009 by <User 1-ERV-6>
=== Field ============ === New Value ============= === Old Value =============
See Also 6828754
====================== ===========================
2007 Jun 15
3
Virtual IP Integration
Has there been any discussion here about the idea of integrating a virtual IP into ZFS? It makes sense to me because of the integration of NFS and iSCSI through the sharenfs and shareiscsi properties. Since these are both dependent on an IP, it would be pretty cool if there were also a virtual IP that would automatically move with the pool.
Maybe something like "zfs set ip.nge0=x.x.x.x mypool"
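For context, the existing properties the proposal is modelled on look like this (dataset and volume names are illustrative):
# per-dataset NFS export, managed by ZFS itself
zfs set sharenfs=on mypool/export
# per-volume iSCSI target, likewise a ZFS property
zfs set shareiscsi=on mypool/vol1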
2007 Oct 29
9
zpool question
Hello folks, I am running Solaris 10 U3 and I have a small problem that I don't
know how to fix...
I had a pool of two drives:
bash-3.00# zpool status
pool: mypool
state: ONLINE
scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        mypool        ONLINE       0     0     0
          emcpower0a  ONLINE       0     0     0
          emcpower1a  ONLINE
2008 Dec 17
10
Cannot remove a file on a GOOD ZFS filesystem
Hello all,
First off, I'm talking about SXDE build 89. Sorry if this was discussed here before, but I did not find anything related in the archives, and I think it is a "weird" issue...
If I try to remove a specific file, I get:
# rm file1
rm: file1: No such file or directory
# rm -rf dir2
rm: Unable to remove directory dir2: Directory not empty
Take a look:
------- cut
2008 Mar 10
2
[Bug 701] New: 'zpool create -o keysource=' fails on sparc - invalid argument
http://defect.opensolaris.org/bz/show_bug.cgi?id=701
Summary: 'zpool create -o keysource=' fails on sparc - invalid
argument
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: SPARC/sun4u
OS/Version: Solaris
Status: NEW
Severity: minor
Priority:
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi,
I have a disk array that provides striped LUNs to my Solaris box. Hence I'd like to simply concatenate those LUNs without adding another layer of striping.
Is this possible with ZFS?
As far as I understand, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get RAID0 striping where each data block is split across all "n" LUNs.
If that's
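The message is truncated. For what it's worth, ZFS has no pure concatenation mode: each device (or each zpool add) becomes another top-level vdev, and new writes are always dynamically striped across all top-level vdevs; a sketch in the poster's notation:
# one vdev now, a second added later; new writes are still spread across both
zpool create myPool lun-1
zpool add myPool lun-2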
2008 Jul 28
1
zpool status my_pool , shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris 10 U5 (5/08),
on a SunFire T5220, and this is our first rollout of ZFS and zpools.
Have 8 disks; the boot disk is hardware mirrored (c1t0d0 + c1t1d0).
Created zpool my_pool as RAID-Z using 5 disks + 1 spare:
c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0
I am working on alerting & recovery plans for disk failures in the zpool.
As a test, I have pulled disk
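The test description is cut off. A hedged sketch of forcing ZFS to notice the pulled disk, since the fault may only be detected when I/O is actually issued to the device:
# a scrub touches every device and should surface the missing disk
zpool scrub my_pool
# -x reports only pools that are degraded or otherwise unhealthy
zpool status -x my_pool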
2008 Jan 24
1
zfs showing more filesystem using ls than df actually has
Platform T2000
SunOS ccluatdwunix1 5.10 Generic_125100-10 sun4v sparc SUNW,Sun-Fire-T200
I have a user who states that ZFS is allocating more file system space than
is actually available: what the ls command reports versus what df -k shows.
He said he used mkfile to verify whether the ZFS quota was working.
He executes "ls -s" to report usage, which reports more allocated than
available from "df
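The quote is cut off. A sketch of the kind of test being described, with an illustrative dataset name and quota (mkfile should be stopped by the quota before the full size is written, which is one way ls -s and df -k can appear to disagree):
zfs set quota=1g mypool/home
cd /mypool/home
mkfile 2g testfile   # expected to fail once the 1 GB quota is reached
ls -s testfile       # blocks actually allocated to the partially written file
df -k /mypool/home   # the filesystem's own space accounting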
2013 Jun 07
2
Setting RBD cache parameters for libvirt+qemu
I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change:
[libvirt] [PATCH] Forbid use of ':' in RBD pool names
...People are known to be abusing the lack of escaping in current libvirt to pass arbitrary args to QEMU.
I am one of those
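The message is truncated. One way to set RBD cache parameters without embedding them in the pool/disk name string is the ceph.conf [client] section (the option names are standard Ceph settings; the values shown are only illustrative):
[client]
    rbd cache = true
    rbd cache size = 33554432
    rbd cache max dirty = 25165824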
2013 Jun 07
1
Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu
On Jun 7, 2013, at 5:01 PM, Josh Durgin <josh.durgin@inktank.com> wrote:
> On 06/07/2013 02:41 PM, John Nielsen wrote:
>> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change:
>> [libvirt] [PATCH] Forbid use of ':'
2010 Mar 19
0
zpool import problem
Hello All,
I have a problem importing pools.
On the source system the pools are configured with emcpower devices on slice
2 (emcpower1c)
zpool create mypool emcpower1c
When I try to do an import on another host with mpxio enabled, I get this
result:
pool: ora_system.2
id: 9755850482304172097
state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool
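The action line is cut off. A sketch of the usual next steps when the devices show up under different names on the target host (MPxIO presents the LUNs under new device paths):
# scan a specific device directory for importable pools
zpool import -d /dev/dsk
# then import the wanted pool by name (or by its numeric id)
zpool import mypool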
2013 Jun 07
0
Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu
On 06/07/2013 02:41 PM, John Nielsen wrote:
> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change:
> [libvirt] [PATCH] Forbid use of ':' in RBD pool names
> ...People are known to be abusing the lack of escaping in current