Hi,

There's something really bizarre in the ZFS snapshot spec: "Uses no separate backing store." Hmm... if I want to share one physical volume somewhere in my SAN as THE snapshot backing store, it becomes impossible to do! Really bad.

Is there any chance of having a "backing-store-file" option in a future release?

Along the same lines, it would be great to have some sort of property to add a disk/LUN/physical space to a pool, reserved only for backing-store use. For now, the only way I see to prevent users from consuming my backing-store space is to set quotas.

Nico
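P.S. For illustration, the quota workaround I mean would look something like this (pool and filesystem names are made up):

  # cap the space the users' filesystem can consume, so the rest of
  # the pool stays free for whatever I want to hold in reserve
  zfs set quota=100G tank/users
  zfs get quota tank/users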
On Wed, Sep 13, 2006 at 07:38:22AM -0700, Nicolas Dorfsman wrote:
> There's something really bizarre in the ZFS snapshot spec: "Uses no separate backing store."

It's not at all bizarre once you understand how ZFS works. I'd suggest reading through some of the documentation available at http://www.opensolaris.org/os/community/zfs/docs/ , in particular the "Slides" available there.

> Hmm... if I want to share one physical volume somewhere in my SAN as THE
> snapshot backing store, it becomes impossible to do! Really bad.
>
> Is there any chance of having a "backing-store-file" option in a future release?

Doing this would have a significant hit on performance, if nothing else. Currently, when you write to a volume which is snapshotted, the system has to:

 1) Write the new data

(Yes, that's it - one step. OK, so I'm ignoring metadata, but...)

If there were a dedicated backing store, this would change to:

 1) Read the old data
 2) Write the old data to the backing store
 3) Write the new data
 4) Free the old data (OK, so that's metadata only, but hey)

ZFS isn't copy-on-write in the same way that things like ufssnap are. ufssnap is copy-on-write in that when you write something, it copies out the old data and writes it somewhere else (the backing store). ZFS doesn't need to do this; it simply writes the new data to a new location and leaves the old data where it is. If that old data is needed for a snapshot it's left unchanged; if it's not, it's freed.

Scott
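P.S. To see that in practice, a quick sketch (pool and filesystem names are made up):

  # taking a snapshot is effectively instant and copies no data
  zfs snapshot tank/data@before
  # the snapshot initially accounts for almost no space; its USED value
  # only grows as the live filesystem diverges from it
  zfs list -t snapshot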
Nicolas Dorfsman wrote:
> Hi,
>
> There's something really bizarre in the ZFS snapshot spec: "Uses no
> separate backing store."
>
> Hmm... if I want to share one physical volume somewhere in my SAN
> as THE snapshot backing store, it becomes impossible to do!
> Really bad.
>
> Is there any chance of having a "backing-store-file" option in a future
> release?
>
> Along the same lines, it would be great to have some sort of property to
> add a disk/LUN/physical space to a pool, reserved only for backing-store
> use. For now, the only way I see to prevent users from consuming my
> backing-store space is to set quotas.

If you want to copy your filesystems (or snapshots) to other disks, you can use 'zfs send' to send them to a different pool (which may even be on a different machine!).

--matt
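P.S. For example (pool, filesystem, and host names here are just placeholders):

  # send a snapshot into another pool on the same machine
  zfs send tank/data@monday | zfs receive backup/data
  # or stream it to a pool on a different machine
  zfs send tank/data@monday | ssh backuphost zfs receive backup/data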
Well.

> ZFS isn't copy-on-write in the same way that things like ufssnap are.
> ufssnap is copy-on-write in that when you write something, it copies out
> the old data and writes it somewhere else (the backing store). ZFS doesn't
> need to do this; it simply writes the new data to a new location and
> leaves the old data where it is. If that old data is needed for a snapshot
> it's left unchanged; if it's not, it's freed.

We need to think of ZFS as ZFS, and not as just another filesystem! I mean, the whole concept is different.

So, what would be the best architecture?

With UFS, I used to have separate metadevices/LUNs for each application. With ZFS, I thought it would be nice to use a separate pool for each application. But that means multiplying snapshot backing stores, OR dynamically removing/adding that space/LUN to whichever pool needs to do backups. Since I can't serialize the backups, my only option is to multiply the reservations for backing stores. Ugh!

Another option would be to create a single pool and put all applications in it... I don't consider that a solution.

Any suggestions?
> If you want to copy your filesystems (or snapshots) to other disks, you
> can use 'zfs send' to send them to a different pool (which may even be
> on a different machine!).

Oh no! That means copying the whole filesystem. The goal here is definitely to snapshot the filesystem and then back up the snapshot.
Matthew Ahrens wrote:
> Nicolas Dorfsman wrote:
>> Hi,
>>
>> There's something really bizarre in the ZFS snapshot spec: "Uses no
>> separate backing store."
>>
>> Hmm... if I want to share one physical volume somewhere in my SAN
>> as THE snapshot backing store, it becomes impossible to do!
>> Really bad.
>>
>> Is there any chance of having a "backing-store-file" option in a future
>> release?
>>
>> Along the same lines, it would be great to have some sort of property to
>> add a disk/LUN/physical space to a pool, reserved only for backing-store
>> use. For now, the only way I see to prevent users from consuming my
>> backing-store space is to set quotas.
>
> If you want to copy your filesystems (or snapshots) to other disks, you
> can use 'zfs send' to send them to a different pool (which may even be
> on a different machine!).

The confusion is probably around the word "snapshot" and all its various uses over the years. The one particular case where people will probably slam their heads into a wall is exporting snapshots to other hosts. If you can get the customer or tech to think in terms of where they want the data and how, instead of snapshots, or LUN copies, or whatever, it makes for an easier conversation.
Nicolas Dorfsman wrote:
> We need to think of ZFS as ZFS, and not as just another filesystem! I
> mean, the whole concept is different.

Agreed.

> So, what would be the best architecture?

What is the problem?

> With UFS, I used to have separate metadevices/LUNs for each
> application. With ZFS, I thought it would be nice to use a separate
> pool for each application.

Ick. It would be much better to have one pool, and a separate filesystem for each application.

> But that means multiplying snapshot backing stores, OR dynamically
> removing/adding that space/LUN to whichever pool needs to do backups.

I don't understand this statement. What problem are you trying to solve?

If you want to do backups, simply take a snapshot, then point your backup program at it. If you want faster incremental backups, use 'zfs send -i' to generate the file to back up.

--matt
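P.S. A rough sketch of both approaches (filesystem and snapshot names are placeholders):

  # take a snapshot and point the backup program at its read-only view
  zfs snapshot tank/oracle@tuesday
  tar cf /backup/oracle-tuesday.tar /tank/oracle/.zfs/snapshot/tuesday
  # or produce an incremental stream containing only the changes since monday
  zfs send -i tank/oracle@monday tank/oracle@tuesday > /backup/oracle-mon-tue.incr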
Hi Matt,

> > So, what would be the best architecture?
>
> What is the problem?

I/O profile isolation versus optimising the snapshot backing-store 'reservation'.

> > With UFS, I used to have separate metadevices/LUNs for each
> > application. With ZFS, I thought it would be nice to use a separate
> > pool for each application.
>
> Ick. It would be much better to have one pool, and a separate
> filesystem for each application.

Including performance considerations? For instance, if I have two Oracle databases with two I/O profiles (TP versus batch), what would be best:

 1) Two pools, each one on two LUNs, each LUN distributed over n trays.
 2) One pool on one LUN, that LUN distributed over 2 x n trays.
 3) One pool striped over two LUNs, each LUN distributed over n trays.

> > But that means multiplying snapshot backing stores, OR dynamically
> > removing/adding that space/LUN to whichever pool needs to do backups.
>
> I don't understand this statement. What problem are you trying to solve?
> If you want to do backups, simply take a snapshot, then point
> your backup program at it.

With one pool, no problem.

With n pools, my problem is the space used by the snapshots. With the COW method of UFS snapshots I can put all the backing stores on one single volume. With ZFS snapshots, that's conceptually impossible.
> Including performance considerations? For instance, if I have two Oracle
> databases with two I/O profiles (TP versus batch), what would be best:
>
> 1) Two pools, each one on two LUNs, each LUN distributed over n trays.
> 2) One pool on one LUN, that LUN distributed over 2 x n trays.
> 3) One pool striped over two LUNs, each LUN distributed over n trays.

Good question. I'll bet there's no way to determine that without testing. It may be that the extra performance from having the additional LUN(s) within a single pool outweighs any performance issues from having both workloads use the same storage.

> With one pool, no problem.
>
> With n pools, my problem is the space used by the snapshots. With the
> COW method of UFS snapshots I can put all the backing stores on one
> single volume. With ZFS snapshots, that's conceptually impossible.

Yup. That's due to the differences in how those snapshots are implemented.

In the future you may be able to add and remove storage from pools dynamically. In that case, it could be possible to bring a disk into a pool, let disk usage grow during a snapshot, delete the snapshot, then remove the disk. Disk removal would require copying data and would be a performance hit. Then you'd go and do the same thing with the other pools.

Today this isn't possible because you cannot migrate data off of a vdev to reclaim the storage.

--
Darren Dunham                                           ddunham at taos.com
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >
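P.S. A sketch of the "grow the pool for the backup window" half of that idea (pool, filesystem, and device names are invented; the shrink step is exactly the part that's missing today):

  # adding a disk to an existing pool works today
  zpool add tank c2t0d0
  zfs snapshot tank/data@backup
  # ...run the backup against the snapshot, then release the space...
  zfs destroy tank/data@backup
  # but there's currently no way to evacuate c2t0d0 and shrink the pool again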
Matthew Ahrens wrote:
> Nicolas Dorfsman wrote:
>> We need to think of ZFS as ZFS, and not as just another filesystem! I
>> mean, the whole concept is different.
>
> Agreed.
>
>> So, what would be the best architecture?
>
> What is the problem?
>
>> With UFS, I used to have separate metadevices/LUNs for each
>> application. With ZFS, I thought it would be nice to use a separate
>> pool for each application.
>
> Ick. It would be much better to have one pool, and a separate
> filesystem for each application.

I agree, but can you set performance boundaries based on the filesystem? The pool level seems to be the place to do such things, for example making sure an application has a set level of IOPS at its disposal.
On Sep 13, 2006, at 10:52, Scott Howard wrote:
> It's not at all bizarre once you understand how ZFS works. I'd suggest
> reading through some of the documentation available at
> http://www.opensolaris.org/os/community/zfs/docs/ , in particular the
> "Slides" available there.

The presentation that goes with those slides is available online:

http://www.sun.com/software/solaris/zfs_learning_center.jsp