similar to: CR 6647661 <User 1-5Q-12446>, Now responsible engineer P2 kernel/zfs "set once" / "create time only" properties can't be set for pool level dataset

Displaying 20 results from an estimated 400 matches similar to: "CR 6647661 <User 1-5Q-12446>, Now responsible engineer P2 kernel/zfs "set once" / "create time only" properties can't be set for pool level dataset"

2009 Apr 15
0
CR 6647661 Updated, P2 kernel/zfs "set once" / "create time only" properties can't be set for pool level dataset
*Synopsis*: "set once" / "create time only" properties can''t be set for pool level dataset CR 6647661 changed on Apr 15 2009 by <User 1-ERV-6> === Field ============ === New Value ============= === Old Value ============= See Also 6828754 ====================== ===========================
2004 Sep 24
0
Function sort.data.frame
I can never remember how to use "order" to sort the rows of a data frame, so like any good, lazy programmer, I decided to write my own function. The idea is to specify a data.frame and a one-sided formula with +/- indicating ascending/descending. For example: sort.data.frame(~ +nitro -Variety, Oats) Since sorting of a data.frame is an oft-asked question on this list, I am posting
2009 Oct 13
2
General means of matching a color specification to an official R color name
Hello List Dwellers: I've looked around quite a bit, but don't quite see an answer that I understand. I'm looking for a way to take any kind of color specification (rgb, hsv, hcl, hex) and match it to the n-nearest R official color names. Clearly it is easy to interconvert different specification schemes and color spaces, but matching to the name seems a bit trickier. Seems like if one has a
2010 May 31
3
zfs permanent errors in a clone
$ zfs list -t filesystem NAME USED AVAIL REFER MOUNTPOINT datapool 840M 25.5G 21K /datapool datapool/virtualbox 839M 25.5G 839M /virtualbox mypool 8.83G 6.92G 82K /mypool mypool/ROOT 5.48G 6.92G 21K legacy mypool/ROOT/May25-2010-Image-Update
2007 Apr 14
3
zfs snaps and removing some files
Hello folks, I have a strange and unusual request... I have two 300gig drives mirrored: [11:33:22] root@chrysek: /d/d2 > zpool status pool: mypool state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 mirror ONLINE 0 0 0 c1t2d0 ONLINE 0 0 0
2007 Apr 20
0
problem mounting one of the zfs file system during boot
hello everyone, I have a strange issue and I am not sure why this is happening. syncing file systems... done rebooting... SC Alert: Host System has Reset Probing system devices Probing memory Probing I/O buses Sun Fire V240, No Keyboard Copyright 2006 Sun Microsystems, Inc. All rights reserved. OpenBoot 4.22.19, 8192 MB memory installed, Serial #65031515. Ethernet address 0:3:ba:e0:4d:5b, Host
2007 Apr 16
10
zfs send/receive question
Hello folks, I have a question and a small problem... I tried to replicate my zfs filesystem with all its snapshots, so I ran a few commands: time zfs send mypool/d@2006_month_10 | zfs receive mypool2/d@2006_month_10 real 6h35m12.34s user 0m0.00s sys 29m32.28s zfs send -i mypool/d@2006_month_10 mypool/d@2006_month_12 | zfs receive mypool/d@2006_month_12 real 4h49m27.54s user
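For context, a minimal sketch of the usual full-plus-incremental replication pattern being discussed. The snapshot names follow the thread, but the target of the incremental receive is what the pattern normally calls for, which may differ from what the poster actually ran:

    # full stream of the oldest snapshot into the target pool
    zfs send mypool/d@2006_month_10 | zfs receive mypool2/d@2006_month_10

    # incremental stream of the changes between the two snapshots,
    # received into the same target filesystem as the full stream
    zfs send -i mypool/d@2006_month_10 mypool/d@2006_month_12 | zfs receive mypool2/d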
2007 Oct 30
2
[osol-help] Squid Cache on a ZFS file system
On 29/10/2007, Tek Bahadur Limbu <teklimbu@wlink.com.np> wrote: > I created a ZFS file system like the following with /mypool/cache being > the partition for the Squid cache: > > 18:51:27 root@solaris:~$ zfs list > NAME USED AVAIL REFER MOUNTPOINT > mypool 478M 31.0G 10.0M /mypool > mypool/cache 230M 9.78G 230M
2006 Sep 15
1
[Blade 150] ZFS: extreme low performance
Hi forum, I'm currently playing around a little with ZFS on my workstation. I created a standard mirrored pool over 2 disk-slices. # zpool status pool: mypool state: ONLINE scrub: none required config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 mirror ONLINE 0 0 0 c0t0d0s4 ONLINE
2008 Jun 07
4
Mixing RAID levels in a pool
Hi, I had a plan to set up a zfs pool with different raid levels but I ran into an issue based on some testing I've done in a VM. I have 3x 750 GB hard drives and 2x 320 GB hard drives available, and I want to set up a RAIDZ for the 750 GB and mirror for the 320 GB and add it all to the same pool. I tested detaching a drive and it seems to seriously mess up the entire pool and I
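A hedged sketch of the mixed layout the poster describes (pool and device names are hypothetical): zpool warns about mismatched replication levels when a mirror vdev is added next to a raidz vdev, and -f is needed to accept that.

    # 3 x 750 GB drives as a raidz top-level vdev
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0

    # add the 2 x 320 GB drives as a mirror vdev; -f acknowledges the
    # mismatched replication level between the two top-level vdevs
    zpool add -f tank mirror c0t3d0 c0t4d0

    # note: on builds of that era a top-level vdev could not be removed
    # again, and zpool detach only applies to mirror halves, not to
    # disks inside a raidz vdev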
2010 May 01
7
Virtual to physical migration
I had created a VirtualBox VM to test out OpenSolaris. I updated to the latest dev build and set things up. Tested pools and various configs/commands. Learnt format/partition etc. And then I wanted to move this stuff to a Solaris partition on the physical disk. VB provides physical disk access. I put my Solaris partition in there and created slices for root, swap etc. in that partition inside the
2011 Oct 05
3
R CMD check
Dear R-Group, I have a function that sorts a data frame and one of the lines in the function is: vars <- unlist(strsplit(formc, "[\\+\\-]")) The function works fine and the above line is always reached. However, when I include the function in a package and run "R CMD check pkgname" it gives this error message: '\+' is an unrecognized escape in character
2007 Jun 15
3
Virtual IP Integration
Has there been any discussion here about the idea of integrating a virtual IP into ZFS? It makes sense to me because of the integration of NFS and iSCSI with the sharenfs and shareiscsi properties. Since these are both dependent on an IP, it would be pretty cool if there was also a virtual IP that would automatically move with the pool. Maybe something like "zfs set ip.nge0=x.x.x.x mypool"
2007 Oct 29
9
zpool question
hello folks, I am running Solaris 10 U3 and I have a small problem that I don't know how to fix... I had a pool of two drives: bash-3.00# zpool status pool: mypool state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 emcpower0a ONLINE 0 0 0 emcpower1a ONLINE
2008 Dec 17
10
Cannot remove a file on a GOOD ZFS filesystem
Hello all, First off, I'm talking about SXDE build 89. Sorry if that was discussed here before, but I did not find anything related in the archives, and I think it is a "weird" issue... If I try to remove a specific file, I get: # rm file1 rm: file1: No such file or directory # rm -rf dir2 rm: Unable to remove directory dir2: Directory not empty Take a look: ------- cut
2008 Mar 10
2
[Bug 701] New: 'zpool create -o keysource=' fails on sparc - invalid argument
http://defect.opensolaris.org/bz/show_bug.cgi?id=701 Summary: 'zpool create -o keysource=' fails on sparc - invalid argument Classification: Development Product: zfs-crypto Version: unspecified Platform: SPARC/sun4u OS/Version: Solaris Status: NEW Severity: minor Priority:
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi, I have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concat those LUNs without adding another layer of striping. Is this possible with ZFS? As far as I understood, if I use zpool create myPool lun-1 lun-2 ... lun-n I will get a RAID0 striping where each data block is split across all "n" LUNs. If that's
2008 Jul 28
1
zpool status my_pool , shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris-10 u5/08 on a SunFire T5220, and this is our first rollout of ZFS and zpools. Have 8 disks; the boot disk is hardware mirrored (c1t0d0 + c1t1d0). Created zpool my_pool as RAIDZ using 5 disks + 1 spare: c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0. I am working on alerting & recovery plans for disk failures in the zpool. As a test, I have pulled disk
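One hedged note on this kind of test: ZFS typically does not mark a pulled disk as faulted until it tries to do I/O to it, so forcing I/O (for example with a scrub) is the usual way to make the failure visible. A minimal sketch using the pool name from the post:

    # touch every device in the pool so the missing disk is noticed
    zpool scrub my_pool

    # -x reports only pools with problems; the pulled disk should then show
    # a non-ONLINE state (e.g. UNAVAIL or REMOVED) instead of ONLINE
    zpool status -x my_pool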
2008 Jan 24
1
zfs showing more filesystem using ls than df actually has
Platform: T2000, SunOS ccluatdwunix1 5.10 Generic_125100-10 sun4v sparc SUNW,Sun-Fire-T200. I have a user who states that ZFS is allocating more file system space, as reported by the ls command, than df -k shows as available. He said he used mkfile to verify that the ZFS quota was working. He runs "ls -s" to report usage, which reports more allocated than available from "df
2013 Jun 07
2
Setting RBD cache parameters for libvirt+qemu
I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change: [libvirt] [PATCH] Forbid use of ':' in RBD pool names ...People are known to be abusing the lack of escaping in current libvirt to pass arbitrary args to QEMU. I am one of those
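For background, a hedged sketch of the mechanism that patch is closing off (pool name, image name, and option values are illustrative): qemu's rbd: drive syntax accepts colon-separated key=value pairs after the image name and passes them to librbd, which is both how cache parameters are commonly tuned and how a ':' inside a pool name could smuggle arbitrary options past libvirt.

    # illustrative qemu invocation; the rbd_cache* keys are librbd config options
    qemu-system-x86_64 \
      -drive file=rbd:rbd/myvm-disk0:rbd_cache=true:rbd_cache_size=33554432,if=virtio,cache=writeback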