Peter van Gemert
2006-Mar-07 12:16 UTC
[zfs-discuss] Place zfs file system on specific disks
Hello,

Is there, or will there be, a possibility to specify the disks (or more precisely, a vdev) on which a new ZFS file system is to be created within the pool?

Greetings,
Peter

This message posted from opensolaris.org
Robert Milkowski
2006-Mar-07 12:42 UTC
[zfs-discuss] Re: Place zfs file system on specific disks
If it has to be created only on a specified vdev (the root vdev, I assume), then why not create a separate pool with that vdev?
Peter van Gemert
2006-Mar-07 13:24 UTC
[zfs-discuss] Re: Place zfs file system on specific disks
My thinking is as follows. A pool is used to logically group related file systems. If I have to create multiple pools, I lose this ability to logically group file systems.

In some cases it would be more useful to have all related data in one pool and still be able to define the disks (vdevs) on which ZFS file systems will be created. In other cases I would simply like to know where my data is.
Darren J Moffat
2006-Mar-07 15:39 UTC
[zfs-discuss] Place zfs file system on specific disks
Peter van Gemert wrote:
> Hello,
>
> Is there or will there be a possibility to specify the disks (or more precisely a vdev) on which a new zfs file system has to be created in the pool?

Why? The whole point of ZFS is that you shouldn't care. If you do care, maybe it means you should have multiple pools.

--
Darren J Moffat
Darren J Moffat
2006-Mar-07 15:45 UTC
[zfs-discuss] Re: Place zfs file system on specific disks
Peter van Gemert wrote:
> My thinking is as follows. A pool is used to logically group related file systems. When I have to create multiple pools, I will lose this ability of logically grouping file systems.
>
> In some cases it would be more useful to have all related data in one pool and then have the possibility to define the disks (vdevs) on which zfs file systems will be created.

But why? What problem are you attempting to solve by wanting to specify the vdev?

--
Darren J Moffat
David Robinson
2006-Mar-07 17:33 UTC
[zfs-discuss] Place zfs file system on specific disks
I think that what Peter is after is to have two file systems with different physical characteristics in the same pool, because he wants them 'near' each other in the namespace. But a pool can only have one distinct characteristic (e.g. RAID-Z).

I think the solution is to have multiple pools, and then, instead of taking the default mount points of /poola/fs1 and /poolb/fs2, explicitly change the mount points of fs1 and fs2 to a shared 'near' place, such as /mystuff/fs1 and /mystuff/fs2. That way you get the namespace you want along with the different characteristics.

-David

On Mar 7, 2006, at 9:39 AM, Darren J Moffat wrote:
> Peter van Gemert wrote:
>> Hello,
>> Is there or will there be a possibility to specify the disks (or more precisely a vdev) on which a new zfs file system has to be created in the pool?
>
> Why? The whole point of ZFS is that you shouldn't care. If you do care, maybe it means you should have multiple pools.
>
> --
> Darren J Moffat
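As a rough sketch of this approach, something like the following should work (pool names, file system names, and device names here are hypothetical, and the commands need root on a host with ZFS):

```shell
# Two pools with different physical characteristics (hypothetical devices)
zpool create poola mirror c1t0d0 c1t1d0          # e.g. a mirrored pool
zpool create poolb raidz c2t0d0 c2t1d0 c2t2d0    # e.g. a RAID-Z pool

# Create the file systems, then move their mount points under one shared namespace
zfs create poola/fs1
zfs create poolb/fs2
zfs set mountpoint=/mystuff/fs1 poola/fs1
zfs set mountpoint=/mystuff/fs2 poolb/fs2
```

Both file systems then appear side by side under /mystuff even though they live in pools with different redundancy characteristics.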
John Garner
2006-Mar-11 20:37 UTC
[zfs-discuss] Re: Re: Place zfs file system on specific disks
Imagine this scenario: I have two sites with two arrays at each site. Array 1 is a 3510 with 146G FC drives; Array 2 is a 3511 with 500G SATA drives. Each array presents a 1TB LUN to the host. Thus I have a total of 4TB of storage presented to me: 2TB of FC and 2TB of SATA.

What I want to do is set up a pool to run a database on. I want the datafiles on the FC drives and the archive logs on the SATA drives because a) it's cheaper, b) the 3511 rocks at sequential write patterns, and c) did I mention that it is cheaper? ;-)

For obvious reasons, I want to have my data mirrored between the two sites, not between the arrays at the same site. I also don't want my active datafiles placed on SATA drives, and I don't want a scenario where data is mirrored such that one copy is on FC and another is on SATA.

For administrative purposes, it is much easier to have just one pool for my database. I could create two pools, but then when I bounce my data between the two servers, I have to deal with two pools rather than just one.

I personally have just that setup: two datacenters 3 miles apart, with 8GB of FC connectivity between the two sites. The latency is such that I cannot distinguish between an array at the local site and an array at the remote site, so I can get away with synchronous mirroring. I want to make sure that if a datacenter burns down, all of my data is fully replicated to the other site. I also have a mix of 3510s and 3511s.

Today I use VCS with VxVM and manually lay out my volumes so that I do not shoot myself in the foot in the event of something that brings a site down (fire/power/flood/etc). VxVM gives me the option of either a) placing volumes where it wants, or b) letting me place the volumes where I want. One is easier; one requires me to have a clue. The important thing here is that the system (VxVM) does not assume that it knows more than I do (it doesn't, and it never will).
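For what it's worth, the closest arrangement ZFS offers today is two pools, each mirrored across the sites, so that FC is always paired with FC and SATA with SATA (the LUN names below are hypothetical, one per array):

```shell
# Pool for datafiles: mirror the two FC (3510) LUNs, one per site
zpool create dbdata mirror c3t0d0 c4t0d0

# Pool for archive logs: mirror the two SATA (3511) LUNs, one per site
zpool create dblogs mirror c5t0d0 c6t0d0
```

Every write then lands at both sites, and neither pool ever mixes an FC copy with a SATA copy; the cost is administering two pools instead of one.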
Richard Elling
2006-Mar-13 14:55 UTC
[zfs-discuss] Re: Re: Place zfs file system on specific disks
> Imagine this scenario:
>
> I have two sites with 2 arrays at each site. Array 1 is a 3510 w/ 146G FC drives, Array 2 is a 3511 w/ 500G SATA drives. Each array presents a 1TB LUN to the host. Thus I have a total of 4TB of storage presented to me, 2TB of FC and 2TB of SATA.
>
> What I want to do is setup a pool to run a database on. I want the datafiles on the FC drives and I want the archive logs on the SATA drives because a) it's cheaper and b) the 3511 rocks at sequential write patterns and c) did I mention that it is cheaper? ;-)
>
> For obvious reasons, I want to have my data mirrored between the two sites, not between the arrays at the same site. I also don't want my active datafiles being placed on SATA drives. I also don't want a scenario where I have data mirrored such that one copy is on FC and another is on SATA.

This is a common request. But please realize that it is not a good disaster recovery plan. Rather, it is simply a mirror.

> For administrative purposes, it is much easier to just have 1 pool for my database. I could just create two pools, but then when I bounce my data between the two servers, I have to deal with two pools rather than just one.

I think you need two pools: one for datafiles and one for archive logs.

> I personally have just that setup: I have two datacenters 3 miles apart. I have 8GB of FC connectivity between the two sites. The latency is such that I can not distinguish between an array at the local site and an array at the remote site, thus I can get away with sync mirroring. I want to make sure that if a datacenter burns down, all of my data is fully replicated to the other site. I also have a mix of 3510s and 3511s.
>
> Today I use VCS w/ VxVM and manually lay out my volumes so that I do not shoot myself in the foot in the event of an event that brings the site down (fire/power/flood/etc). VxVM gives me the option of either a) placing volumes where it wants or b) letting me place the volumes where I want. One is easier, one requires me to have a clue.

Beware that you are creating a dependency between sites. As long as you have a mirror, the two sites are not independent. This is why I say this is not a disaster recovery plan. A disaster recovery plan must also consider independence and the complications imposed by such independence.

> The important thing here is that the system (VxVM) does not assume that it knows more than you do (it doesn't, and it never will).

I envision a different usage for ZFS. With ZFS, snapshots are essentially free. What you want for your disaster recovery site is a consistent snapshot. This will be easier to do with ZFS than with most other file system/LVM combinations.

-- richard
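The snapshot-based approach Richard describes could be sketched roughly as follows (pool, file system, and host names are hypothetical; this assumes ZFS send/receive is available and the remote host runs a fully independent pool):

```shell
# Take a consistent, point-in-time snapshot of the database file system
zfs snapshot dbdata/oracle@monday

# Replicate it to an independent pool on a host at the remote site
zfs send dbdata/oracle@monday | ssh remotehost zfs receive drpool/oracle

# Thereafter, ship only the changes since the last replicated snapshot
zfs snapshot dbdata/oracle@tuesday
zfs send -i dbdata/oracle@monday dbdata/oracle@tuesday | \
    ssh remotehost zfs receive drpool/oracle
```

Because the remote pool is a separate pool rather than half of a mirror, it remains usable on its own even if the primary site is lost entirely, which is the independence a disaster recovery plan needs.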