similar to: suggestion: directory promotion to filesystem

Displaying 20 results from an estimated 9000 matches similar to: "suggestion: directory promotion to filesystem"

2009 Dec 10
6
Confusion regarding 'zfs send'
I'm playing around with snv_128 on one of my systems, and trying to see what kind of benefit enabling dedup will give me. The standard practice for reprocessing data that's already stored to add compression and now dedup seems to be a send / receive pipe similar to: zfs send -R <old fs>@snap | zfs recv -d <new fs> However, according to the man page,
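A minimal sketch of that rewrite pipe, assuming a hypothetical source dataset tank/data and a destination pool tank2 that already exists with the new features turned on (all names here are placeholders, not from the original post):

    # turn on the features the rewritten data should pick up (older builds accept
    # only one property per 'zfs set' invocation)
    zfs set compression=on tank2
    zfs set dedup=on tank2
    # snapshot the source recursively, then replicate it, rewriting every block
    zfs snapshot -r tank/data@migrate
    zfs send -R tank/data@migrate | zfs recv -d tank2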
2007 Mar 28
6
ZFS and UFS performance
We are running Solaris 10 11/06 on a Sun V240 with 2 CPUs and 8 GB of memory. This V240 is attached to a 3510 FC that has 12 x 300 GB disks. The 3510 is configured as HW RAID 5 with 10 disks and 2 spares and it's exported to the V240 as a single LUN. We create iso images of our product in the following way (high-level): # mkfile 3g /isoimages/myiso # lofiadm -a /isoimages/myiso
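The quoted message is cut off right after the lofiadm step; a guess at how such a flow typically continues on Solaris, with the lofi device and mount point names purely illustrative:

    # back the file with a block device; lofiadm prints the device it created,
    # usually /dev/lofi/1 on an otherwise idle system
    lofiadm -a /isoimages/myiso
    # put a UFS file system on it, mount it, and populate it with the product bits
    newfs /dev/rlofi/1
    mkdir -p /mnt/iso
    mount /dev/lofi/1 /mnt/iso
    # ... copy the product tree into /mnt/iso, then tear it down again
    umount /mnt/iso
    lofiadm -d /dev/lofi/1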
2008 Jul 25
11
send/receive
I created a snapshot of my whole zpool (zfs version 3): zfs snapshot -r tank@`date +%F_%T` then tried to send it to the remote host: zfs send tank@2008-07-25_09:31:03 | ssh user@10.0.1.14 -i identitykey 'zfs receive tank/tankbackup' but got the error "zfs: command not found" since user is not superuser, even though it is in the root group. I found
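Two things usually bite here: /usr/sbin is not in the PATH of a non-interactive ssh session, and a non-root receiver needs delegated ZFS permissions. A sketch under those assumptions (the 'zfs allow' step requires a build that has delegated administration at all):

    # once, as root on 10.0.1.14: let the plain user receive into the pool
    zfs allow user create,mount,receive tank
    # on the sending side, call the remote zfs by its full path
    zfs send tank@2008-07-25_09:31:03 | \
        ssh -i identitykey user@10.0.1.14 /usr/sbin/zfs receive tank/tankbackup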
2007 Aug 17
4
Privileges
Hi all! I need a non-root user to be able to perform zfs snapshots and rollbacks. Does anybody know what privileges should be specified in /etc/user_attr? Best regards, Lars-Erik Björk
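One answer on builds that have ZFS delegated administration is 'zfs allow' rather than /etc/user_attr; a sketch with a placeholder user and dataset (rollback also needs the mount permission, since it remounts the file system):

    # as root, delegate the needed operations on the user's dataset
    zfs allow lars snapshot,rollback,mount tank/home
    # the non-root user can then run these directly
    zfs snapshot tank/home@before-upgrade
    zfs rollback tank/home@before-upgrade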
2007 Apr 10
15
Poor man's backup by attaching/detaching mirror drives on a _striped_ pool?
Hi, one quick & dirty way of backing up a pool that is a mirror of two devices is to zpool attach a third one, wait for the resilvering to finish, then zpool detach it again. The third device can then be used as a poor man's simple backup. Has anybody tried it yet with a striped mirror? What if the pool is composed of two mirrors? Can I attach devices to both mirrors, let them
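For a pool built from two mirrors the same trick would have to be applied to each mirror vdev; a sketch with placeholder device names:

    # attach a spare disk to each of the two mirror vdevs and let them resilver
    zpool attach tank c1t0d0 c3t0d0
    zpool attach tank c2t0d0 c3t1d0
    zpool status tank        # wait until both resilvers have completed
    # then detach the extra disks again
    zpool detach tank c3t0d0
    zpool detach tank c3t1d0

Each detached disk would then hold only half of the striped data, and a plain 'zpool detach' does not leave a device that can simply be imported on its own, which is why later ZFS releases grew 'zpool split' for this kind of mirror-based backup.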
2006 Aug 25
4
Looking for confirmation.
Hi. I've got almost all file system functions working. I started to run some heavy file system regression tests. They work. fsx wasn't able to break my port, but the test you can find here: http://people.freebsd.org/~kan/fsstress.tar.gz broke it. My kernel panics on this assertion (zfs_dir.c): 749: mutex_exit(&dzp->z_lock); 750: 751: error =
2008 Sep 03
1
bugged sysinstall, bsdlabel, zfs, gmirror - recipe for disaster :)
Hello there! Here's my story; hopefully some of you won't follow my steps and will avoid some trouble :) Yesterday I decided that it's about time to test ZFS functionality on my home server PC (i386 FreeBSD 7.1-pre). A couple of weeks ago I bought a new desktop PC (with SATA), so I had a bunch of PATA disks from the old one to use in the server. Lucky me - there were 3 HDDs of 40GB each -
2007 Aug 07
5
Extending RAIDZ.
Yeah :) I'd like to work on this. Here are my first observations: - We need to call the vdev_op_asize method with an additional 'offset' argument, - We need to move data to the new disk starting from the very beginning, so we can't reuse the scrub/resilver code, which does a tree-walk through the data. Below you can see how I imagine extending RAIDZ. Here is the legend:
2007 Apr 26
7
device name changing
Hi. If I create a zpool with the following command: zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 and after a reboot the device names have for some reason changed so that da2 and da5 are swapped (either by altering the LUN setting on the storage, or by switching cables/swapping disks, etc.), how will ZFS handle that? Will it simply acknowledge that all devices are present and the pool is
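ZFS tracks pool members by the labels written on the disks rather than by device path, so swapped names are normally picked up transparently; a quick way to double-check on a test box, using the pool name from the quoted command:

    # re-discover the member disks from their on-disk labels
    zpool export tank
    zpool import tank
    zpool status tank    # vdevs should show the new da* names and the pool stays ONLINE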
2008 Jan 31
7
mounting a copy of a zfs pool/file system while the original is still active
Hello Sun gurus, I do not know if this is supported. I have created a zpool consisting of SAN resources and created a zfs file system. Using third-party software I have taken snapshots of all LUNs in the zfs pool. My question is: in a recovery situation, is there a way for me to mount the snapshots and import the pool while the original is still active? Right now all I am able to do is export
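A heavily hedged note, since the question is about array-level LUN snapshots: copies made below ZFS carry the same pool identity as the original, so the ZFS of that era generally refuses to import them on a host where the original pool is still active, and importing the copies on a second host is the usual workaround. If ZFS-native snapshots are an option instead, the equivalent with placeholder names looks like this:

    # snapshot the file system and expose a writable clone of it while the
    # original stays mounted and in use
    zfs snapshot sanpool/data@recovery-test
    zfs clone sanpool/data@recovery-test sanpool/data_recovery
    zfs set mountpoint=/recovery sanpool/data_recovery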
2006 Sep 13
10
Snapshots and backing store
Hi, There's something really bizarre in the ZFS snapshot specs: "Uses no separate backing store." Hmm... if I want to use one shared physical volume somewhere in my SAN as THE snapshot backing store... it becomes impossible to do! Really bad. Is there any chance of having a "backing-store-file" option in a future release? Along the same lines, it would be great to
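A small illustration of what "no separate backing store" means in practice: snapshot blocks live in the same pool as the live data and only start consuming space as the live file system diverges from them (dataset names are placeholders):

    # snapshots are allocated out of the pool itself, not a dedicated volume
    zfs snapshot tank/home@monday
    zfs list -t snapshot -o name,used,referenced
    # USED grows only as blocks referenced by the snapshot are overwritten or
    # freed in the live file system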
2009 Feb 02
8
ZFS core contributor nominations
The time has come to review the current Contributor and Core contributor grants for ZFS. Since all of the ZFS core contributor grants are set to expire on 02-24-2009, we need to renew the members that are still contributing at core contributor levels. We should also add some new members to both the Contributor and Core contributor levels. First, the current list of Core contributors: Bill
2007 Apr 30
4
need some explanation
Hi, OS: Solaris 10 11/06. zpool list doesn't reflect pool usage stats instantly. Why? # ls -l total 209769330 -rw------T 1 root root 107374182400 Apr 30 14:28 deleteme # zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT wo 136G 100G 36.0G 73% ONLINE - # rm deleteme # zpool list NAME SIZE
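A hedged guess at what is going on: space freed by a large rm is handed back to the pool when the open transaction group commits, so the pool-wide counters can lag the rm by several seconds. Something like this usually shows the numbers catching up:

    rm deleteme
    # give the transaction group time to sync out, then look again
    sleep 30
    zpool list wo
    zfs list wo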
2006 Mar 03
5
flag day: ZFS on-disk format change
Summary: If you use ZFS, do not downgrade from build 35 or later to build 34 or earlier. This putback (into Solaris Nevada build 35) introduced a backwards-compatible change to the ZFS on-disk format. Old pools will be seamlessly accessed by the new code; you do not need to do anything special. However, do *not* downgrade from build 35 or later to build 34 or earlier. If you do so, some of
2008 Jan 24
1
zfs showing more filesystem space via ls than df actually has
Platform: T2000, SunOS ccluatdwunix1 5.10 Generic_125100-10 sun4v sparc SUNW,Sun-Fire-T200. I have a user who states that zfs is allocating more file system space via the ls command than is actually available according to df -k. He stated he used mkfile to verify whether the ZFS quota was working. He executes "ls -s" to report usage, which reports more allocated than available from "df
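One way to line the two views up is to ask ZFS directly what the dataset's quota, usage and headroom are, next to what ls -s and df report for the same object (the paths and dataset names here are placeholders):

    # what ZFS itself will allow and has accounted for on the dataset
    zfs get quota,used,available tank/home/user
    # allocated blocks for the test file as ls sees them, and the df view
    ls -ls /tank/home/user/testfile
    df -k /tank/home/user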
2007 Jan 10
4
[osol-discuss] Re: bare metal ZFS ? How To ?
this is off list on purpose? > run zpool import, it will search all attached storage and give you a list > of available pools. then run zpool import poolname, or add a -f if you > didn't export before the install/upgrade. Assume the worst case: someone walks up to you and drops an array on you. They say "its ZFS an' I need that der stuff 'k? " all
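The procedure described in the quoted reply, spelled out with a placeholder pool name:

    # scan all attached storage for importable pools and list what is found
    zpool import
    # import by the name (or numeric id) shown; -f is needed when the pool was
    # never cleanly exported, e.g. an array that was simply dropped on your desk
    zpool import -f tank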
2006 Sep 11
95
Proposal: multiple copies of user data
Here is a proposal for a new 'copies' property which would allow different levels of replication for different filesystems. Your comments are appreciated! --matt A. INTRODUCTION ZFS stores multiple copies of all metadata. This is accomplished by storing up to three DVAs (Disk Virtual Addresses) in each block pointer. This feature is known as "Ditto Blocks". When
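For reference, the property as it eventually shipped is set per file system, roughly like this (pool and dataset names are placeholders):

    # keep two copies of every user-data block in this file system; metadata
    # already has its own ditto copies, and the setting only affects data
    # written after the property is set
    zfs create -o copies=2 tank/important
    # or on an existing file system
    zfs set copies=2 tank/important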
2009 May 20
5
ZFS userquota groupquota test
I have been playing around with the osol-nv-b114 version, and the ZFS user and group quotas. First of all, it is fantastic. Thank you all! (Sun, Ahrens and anyone else involved). I'm currently copying over one of the smaller user areas, and setting up their quotas, so I have yet to start large-scale testing. But the initial work is very promising. (Just 90G of data, 341694 accounts) Using
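The new properties are set per user or group on a file system, roughly as below; the user, group and dataset names are placeholders:

    # per-user and per-group quotas as introduced around build 114
    zfs set userquota@alice=10G tank/home
    zfs set groupquota@staff=200G tank/home
    # report per-user space consumption on the dataset
    zfs userspace tank/home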
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was trying to simply test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!? I am getting basically the same result whether it is a single ZFS drive, a mirror or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
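The two measurements, as one would usually reproduce them, with an explicit large block size (dd's default 512-byte blocks can skew the raw-device number); the device and file names are placeholders:

    # raw device read, bypassing ZFS
    dd if=/dev/rdsk/c1t0d0s0 of=/dev/null bs=1024k count=1000
    # read of a large pre-written file through ZFS
    dd if=/tank/bigfile of=/dev/null bs=1024k count=1000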