Displaying 20 results from an estimated 13000 matches similar to: "ZFS, home and Linux"
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi,
I have a disk array that is providing striped LUNs to my Solaris box, so I'd like to simply concatenate those LUNs without adding another layer of striping.
Is this possible with ZFS?
As far as I understood, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get a RAID0 striping where each data block is split across all "n" LUNs.
If that's
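As far as I know, zpool offers no pure concatenation mode: each device passed to zpool create becomes a top-level vdev, and ZFS dynamically spreads new allocations across all of them (it does not split every block across every LUN). A minimal sketch, with hypothetical LUN names:
zpool create myPool lun-1 lun-2 lun-3   # three top-level vdevs; new writes are spread across them
zpool add myPool lun-4                  # grows the pool; later writes also land on the new vdev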
2008 Dec 17
10
Cannot remove a file on a GOOD ZFS filesystem
Hello all,
First off, I'm talking about SXDE build 89. Sorry if that was discussed here before, but I did not find anything related in the archives, and I think it is a "weird" issue...
If I try to remove a specific file, I get:
# rm file1
rm: file1: No such file or directory
# rm -rf dir2
rm: Unable to remove directory dir2: Directory not empty
Take a look:
------- cut
2007 Feb 26
15
Efficiency when reading the same file blocks
If you have N processes reading the same file sequentially (where file size is much greater than physical memory) from the same starting position, should I expect all N processes to finish in the same time as a single process would?
In other words, if you have one process that reads blocks from a file, is it "free" (meaning no additional total I/O cost) to have another process
2007 Mar 28
20
Gzip compression for ZFS
Adam,
The blog entry[1] you've made about gzip for ZFS raises
a couple of questions...
1) It would appear that a ZFS filesystem can support files of
varying compression algorithms. If a file is compressed using
method A but method B is now active, and I truncate the file
and rewrite it, is A or B used?
2) The question of whether or not to use bzip2 was raised in
the
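For question 1, my understanding is that the compression property is applied per block at write time, so a truncated and rewritten file gets whatever algorithm is active when the new blocks land; blocks written earlier keep the algorithm they were compressed with. A quick sketch, dataset name illustrative:
zfs set compression=gzip tank/fs    # blocks written from now on use gzip
zfs get compression tank/fs         # the property describes future writes, not existing blocks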
2007 Nov 15
3
read/write NFS block size and ZFS
Hello all...
I'm migrating an NFS server from Linux to Solaris, and all clients (Linux) are using read/write block sizes of 8192. That gave the best performance I could get, and it's working pretty well (NFSv3). I want to use all of ZFS's advantages, and I know I may take a performance hit, so I want to know if there is a "recommendation" for block size on NFS/ZFS, or
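A minimal sketch of the tuning usually suggested for this, with the dataset name and the 8K value as assumptions chosen to match the clients:
zfs set recordsize=8K mypool/export     # match the 8 KB NFS transfer size; affects newly written files only
zfs set sharenfs=rw mypool/export
mount -t nfs -o vers=3,rsize=8192,wsize=8192 server:/mypool/export /mnt   # on a Linux client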
2006 Oct 13
24
Self-tuning recordsize
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
--
Regards,
Jeremy
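There is no such auto-tuning today as far as I know; recordsize is set by hand per dataset. A sketch, with names and sizes purely illustrative:
zfs get recordsize tank/db      # the default is 128K
zfs set recordsize=16K tank/db  # match the application's I/O size; applies to newly written files only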
2008 Mar 13
3
Round-robin NFS protocol with ZFS
Hello all,
I was wondering whether the following scenario could be possible:
1 - Export/import a ZFS filesystem in two solaris servers.
2 - Export that filesystem (NFS).
3 - Mount that filesystem on clients in two different mount points (just to authenticate in both servers/UDP).
4a - Use some kind of "man-in-the-middle" to auto-balance the connections (the same IP on servers)
or
4b - Use different
2008 Apr 29
24
recovering data from a detached mirrored vdev
Hi,
my system (Solaris b77) was physically destroyed and I lost data saved in a zpool mirror. The only thing left is a detached vdev from the pool. I'm aware that the uberblock is gone and that I can't import the pool. But I still hope there is a way or a tool (like TCT, http://www.porcupine.org/forensics/) I can use to recover at least some of the data partially)
thanks in advance for
2010 May 31
3
zfs permanent errors in a clone
$ zfs list -t filesystem
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
datapool                              840M  25.5G    21K  /datapool
datapool/virtualbox                   839M  25.5G   839M  /virtualbox
mypool                               8.83G  6.92G    82K  /mypool
mypool/ROOT                          5.48G  6.92G    21K  legacy
mypool/ROOT/May25-2010-Image-Update
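A hedged pointer for errors like this: the affected files are usually listed with zpool status -v, and a scrub re-verifies every block:
zpool status -v mypool    # prints the list of files with permanent (unrecoverable) errors
zpool scrub mypool        # re-reads and checksums everything in the pool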
2006 Oct 10
3
Solaris 10 / ZFS file system major/minor number
Hi,
In migrating from **VM to ZFS am I going to have an issue with Major/Minor numbers with NFS mounts? Take the following scenario.
1. NFS clients are connected to an active NFS server that has SAN shared storage between the active and standby nodes in a cluster.
2. The NFS clients are using the major/minor numbers on the active node in the cluster to communicate to the NFS active server.
3.
2007 Apr 16
10
zfs send/receive question
Hello folks, I have a question and a small problem... I tried to replicate my
ZFS filesystem with all of its snapshots, so I ran a few commands:
time zfs send mypool/d@2006_month_10 | zfs receive mypool2/d@2006_month_10
real 6h35m12.34s
user 0m0.00s
sys 29m32.28s
zfs send -i mypool/d@2006_month_10 mypool/d@2006_month_12 | zfs receive mypool/d@2006_month_12
real 4h49m27.54s
user
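For reference, a sketch of the usual incremental chain, reusing the pool/dataset names from the post; the incremental stream is received into the same target filesystem that already holds the earlier snapshot:
zfs send mypool/d@2006_month_10 | zfs receive mypool2/d@2006_month_10
zfs send -i mypool/d@2006_month_10 mypool/d@2006_month_12 | zfs receive mypool2/d   # add -F if the target was modified in between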
2010 Mar 27
16
zpool split problem?
Zpool split is a wonderful feature and it seems to work well,
and the choice of which disk got which name was perfect!
But there seems to be an odd anomaly (at least with b132).
Started with c0t1d0s0 running b132 (root pool is called rpool)
Attached c0t0d0s0 and waited for it to resilver
Rebooted from c0t0d0s0
zpool split rpool spool
Rebooted from c0t0d0s0, both rpool and spool were mounted
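For reference, the sequence as I understand it, using the device and pool names from the post:
zpool attach rpool c0t1d0s0 c0t0d0s0   # mirror the root pool onto the second disk and let it resilver
zpool split rpool spool                # detach one half as a new pool, which is left exported
zpool import spool                     # import the split-off pool explicitly when it is wanted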
2010 Jun 11
9
Are recursive snapshot destroy and rename atomic too?
In another thread, recursive snapshot creation was found to be atomic, so that
it is done quickly and, more importantly, happens all at once or not at all.
Do you know if recursive destruction and renaming of snapshots are atomic too?
Regards
Henrik Heino
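For context, the three recursive forms in question (dataset and snapshot names are illustrative; whether the last two are atomic is exactly what is being asked):
zfs snapshot -r tank@backup              # recursive create, reported atomic in the other thread
zfs rename -r tank@backup tank@nightly   # recursively rename the snapshot in every descendant
zfs destroy -r tank@nightly              # recursively destroy the snapshot in every descendant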
2007 Oct 30
2
[osol-help] Squid Cache on a ZFS file system
On 29/10/2007, Tek Bahadur Limbu <teklimbu@wlink.com.np> wrote:
> I created a ZFS file system like the following with /mypool/cache being
> the partition for the Squid cache:
>
> 18:51:27 root at solaris:~$ zfs list
> NAME           USED  AVAIL  REFER  MOUNTPOINT
> mypool         478M  31.0G  10.0M  /mypool
> mypool/cache   230M  9.78G   230M
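A sketch of how such a cache filesystem is often created and tuned; the property values here are assumptions, not the poster's actual settings:
zfs create -o recordsize=16K -o compression=lzjb -o atime=off mypool/cache
zfs set quota=10G mypool/cache    # keep the cache from consuming the whole pool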
2007 May 02
41
gzip compression throttles system?
I just had a quick play with gzip compression on a filesystem and the
result was the machine grinding to a halt while copying some large
(.wav) files to it from another filesystem in the same pool.
The system became very unresponsive, taking several seconds to echo
keystrokes. The box is a maxed out AMD QuadFX, so it should have plenty
of grunt for this.
Comments?
Ian
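A hedged workaround sketch: compression=gzip defaults to level 6, which is much heavier on the CPU than gzip-1 or the default lzjb, and the level can be set per filesystem (dataset name illustrative):
zfs set compression=gzip-1 tank/audio   # lighter gzip level, far less CPU per block
zfs set compression=lzjb tank/audio     # or fall back to the cheap default algorithm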
2006 Sep 11
95
Proposal: multiple copies of user data
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Your comments are appreciated!
--matt
A. INTRODUCTION
ZFS stores multiple copies of all metadata. This is accomplished by
storing up to three DVAs (Disk Virtual Addresses) in each block pointer.
This feature is known as "Ditto Blocks". When
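The 'copies' property did eventually ship; for reference, a minimal usage sketch with illustrative dataset names:
zfs create -o copies=2 tank/home    # keep two copies of every data block in this filesystem
zfs set copies=3 tank/critical      # only affects data written after the property is changed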
2007 Feb 03
4
Which label does a ZFS/zpool device have? VTOC or EFI?
Hi All,
The zpool/zfs commands write an EFI label on a device if we create a zpool/ZFS filesystem on it. Is that true?
I formatted a device with a VTOC label and created a ZFS file system on it.
Which label does the ZFS device have now? Is it the old VTOC or EFI?
After creating the ZFS file system on a VTOC labeled disk, I am seeing the following warning messages.
Feb 3 07:47:00 scoobyb
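As far as I know, zpool writes an EFI label only when it is given a whole disk; if you hand it a slice of a VTOC-labelled disk, the existing VTOC (SMI) label stays in place. A sketch with illustrative device names:
zpool create tank c1t0d0      # whole disk: ZFS relabels it with an EFI label
zpool create tank c1t0d0s0    # a slice: the existing VTOC label is left untouched
prtvtoc /dev/rdsk/c1t0d0s2    # inspect the label/partition table to see which one is in use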
2007 Apr 02
4
Convert raidz
Hi
Is it possible to convert a live 3-disk zpool from raidz to raidz2?
And is it possible to add 1 new disk to a raidz configuration without backups and recreating the zpool from scratch?
Thanks
This message posted from opensolaris.org
2007 Oct 29
9
zpool question
hello folks, I am running Solaris 10 U3 and I have a small problem that I don't
know how to fix...
I had a pool of two drives:
bash-3.00# zpool status
pool: mypool
state: ONLINE
scrub: none requested
config:
NAME          STATE   READ WRITE CKSUM
mypool        ONLINE     0     0     0
  emcpower0a  ONLINE     0     0     0
  emcpower1a  ONLINE