Hello,

First off, ZFS looks very exciting. Our biggest problem is I/O bandwidth to clients. We have large storage pools, and each one is an island: they can only be accessed via a single head. How does ZFS plan on dealing with distributed metadata? A zettabyte of storage under a single namespace is nice, but if only one host at a time can access it, it seems a bit pointless.

Thanks!
On Mon, 2005-11-21 at 00:23, Barry Robison wrote:
> First off, ZFS looks very exciting. Our biggest problem is I/O bandwidth
> to clients. We have large storage pools, and each one is an island. They
> can only be accessed via a single head. How does ZFS plan on dealing
> with distributed metadata? A zettabyte of storage under a single
> namespace is nice, but if only one host at a time can access it, it
> seems a bit pointless.

What's wrong with NFSv4 for the other hosts?

-- 
Darren J Moffat
> What's wrong with NFSv4 for the other hosts?

Well, in this case the other (1500) hosts are Windows. Of course we could use SFU or Hummingbird for NFS, but that wouldn't solve the bandwidth issue. Only so many machines can be physically attached to the spindles, and those hosts become the bottleneck to the bulk of the clients.

Basically I'm wondering if Sun/ZFS will try to compete in the Lustre/GFS/GPFS/Polyserve space.

cheers =)
Barry Robison wrote:
> First off, ZFS looks very exciting. Our biggest problem is I/O bandwidth
> to clients. We have large storage pools, and each one is an island. They
> can only be accessed via a single head. How does ZFS plan on dealing
> with distributed metadata? A zettabyte of storage under a single
> namespace is nice, but if only one host at a time can access it, it
> seems a bit pointless.

It's a ways off (i.e., we haven't started working on it yet), but we have plans to enhance ZFS to be a fully cluster-accessible file system in the future. Now that the ZFS team have made their first release, and have mostly recovered from the party afterwards, we hope to begin real design work on these enhancements soon.

-- 
Ed Gould
File System Architect, Sun Cluster
Sun Microsystems
ed.gould at sun.com
17 Network Circle, M/S UMPK17-201
Menlo Park, CA 94025
+1.650.786.4937 (x84937)
Ed Gould wrote:
> It's a ways off (i.e., we haven't started working on it yet), but we
> have plans to enhance ZFS to be a fully cluster-accessible file system
> in the future. Now that the ZFS team have made their first release, and
> have mostly recovered from the party afterwards, we hope to begin real
> design work on these enhancements soon.

What kinds of timeframes are we talking about? Nevada?

This message posted from opensolaris.org
Mikael Gueck wrote:
> Ed Gould wrote:
>
>> It's a ways off (i.e., we haven't started working on it yet), but we
>> have plans to enhance ZFS to be a fully cluster-accessible file system
>> in the future. Now that the ZFS team have made their first release, and
>> have mostly recovered from the party afterwards, we hope to begin real
>> design work on these enhancements soon.
>
> What kinds of timeframes are we talking about? Nevada?

It's too soon to tell. We haven't begun any serious design work yet.

-- 
Ed Gould
On Mon, Barry Robison wrote:
>> What's wrong with NFSv4 for the other hosts?
>
> Well, in this case the other (1500) hosts are Windows. Of course we
> could use SFU or Hummingbird for NFS, but that wouldn't solve the
> bandwidth issue.
>
> Only so many machines can be physically attached to the spindles.
> Those hosts become the bottleneck to the bulk of the clients.
>
> Basically I'm wondering if Sun/ZFS will try to compete in the
> Lustre/GFS/GPFS/Polyserve space.

NFSv4 with the proposed pNFS extensions for NFSv4.1, which distribute the I/O workload among various data paths, should make it possible to increase the available bandwidth.

Spencer
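[For readers unfamiliar with the mechanism Spencer mentions: in pNFS, the NFSv4.1 metadata server hands a client a "layout" describing where a file's data lives, and the client then does bulk I/O directly against those data servers, so bandwidth scales with the number of data paths rather than funneling through one head. A minimal client-side sketch, assuming a Linux client; the hostname and export path are hypothetical, and whether pNFS is actually used depends on the server offering a layout type the client supports:]

```shell
# Hypothetical setup: mds.example.com is the metadata server exporting
# /pool/export. Requesting NFSv4.1 (minorversion=1) lets the client
# negotiate pNFS layouts if the server provides them.
mount -t nfs4 -o minorversion=1 mds.example.com:/pool/export /mnt/pool

# Reads and writes under /mnt/pool can then go directly to the data
# servers named in the layout, spreading I/O across multiple paths
# instead of through the single metadata head.
```

[With 1500 Windows clients this of course only helps once a pNFS-capable client exists for that platform; the sketch just illustrates how the data path is decoupled from the metadata server.]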