Buried in the announcements last week from Sun is the Sun Flash Module.
http://www.sun.com/storage/flash/module.jsp

I wanted to bring this up on this forum because it represents an
interesting way to add SSD technology to a system design. The new
Sun Blade X6275 has slots for these SSDs and I expect to see future
designs follow suit.

Brief specifications:
  SO-DIMM form factor (very small)
  64 MByte DRAM buffers
  24 GBytes available space
  Enterprise class
  SATA interface

I think this development is significant for several reasons.

1. It represents the beginning of the end for 5.25", 3.5", 2.5", and 1.8"
disk form factors.

2. You can now build an SSD disk array into a wide variety of form factors,
which would be impossible to do with spinning media.

3. The (common) requirement for mirrored boot disks should prove obsolete.

4. It will be a natural fit for ZFS, particularly for ZFS root systems.

5. Not likely to be hot-pluggable, but do we really care?

Thoughts?
 -- richard
Richard Elling wrote:
> Buried in the announcements last week from Sun is the Sun Flash Module.
> http://www.sun.com/storage/flash/module.jsp
>
> I wanted to bring this up on this forum because it represents an
> interesting way to add SSD technology to a system design. The new
> Sun Blade X6275 has slots for these SSDs and I expect to see future
> designs follow suit.
>
> Brief specifications:
>   SO-DIMM form factor (very small)
>   64 MByte DRAM buffers
>   24 GBytes available space
>   Enterprise class
>   SATA interface
>
> I think this development is significant for several reasons.
>
> 1. It represents the beginning of the end for 5.25", 3.5", 2.5", and 1.8"
> disk form factors.
>
> 2. You can now build an SSD disk array into a wide variety of form
> factors which would be impossible to do with spinning media.
>
> 3. The (common) requirement for mirrored boot disks should prove
> obsolete.
>
> 4. Will be a natural fit for ZFS, particularly for ZFS root systems.
>
> 5. Not likely to be hot-pluggable, but do we really care?
>
> Thoughts?

It would be really interesting if there was an accompanying motherboard
(or [23].5" case) with SATA power and data connectors. As it stands,
there isn't a convenient way for Joe Public (i.e. me) to easily evaluate
one.

It does represent the next big thing in storage, but it risks languishing
in a corner unless actively promoted in an easy-to-use form. Or until a
company with more aggressive marketing picks up the idea and grabs the
market.

-- Ian.
On Sat, 18 Apr 2009, Ian Collins wrote:
> It does represent the next big thing in storage, but it risks
> languishing in a corner unless actively promoted in an easy-to-use
> form. Or until a company with more aggressive marketing picks up the
> idea and grabs the market.

Violin (http://violin-memory.com/) has been selling solid state storage
for a couple of years which uses a tall "DIMM" format.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
On Fri, 17 Apr 2009, Richard Elling wrote:
> Brief specifications:
>   SATA interface
>
> Thoughts?

SATA is so "yesterday". It represents "in the box" thinking. Sun
engineering should still be capable of thinking "outside the box".
Considerable optimizations/improvements are possible by eradicating
legacy interfaces and employing the same sort of "outside the box"
thinking that produced zfs. It is time to short-circuit that legacy
storage stack.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Bob Friesenhahn wrote:
> On Fri, 17 Apr 2009, Richard Elling wrote:
>> Brief specifications:
>>   SATA interface
>>
>> Thoughts?
>
> SATA is so "yesterday". It represents "in the box" thinking. Sun
> engineering should still be capable of thinking "outside the box".
> Considerable optimizations/improvements are possible by eradicating
> legacy interfaces and employing the same sort of "outside the box"
> thinking that produced zfs. It is time to short-circuit that legacy
> storage stack.

Nice sentiment, but not a lot of use if they want people to use the
module! ZFS, while definitely the result of "outside the box" thinking,
can be used on all existing hardware. How far would it have progressed
if it had required specialised hardware?

-- Ian.
Bob Friesenhahn wrote:
> On Fri, 17 Apr 2009, Richard Elling wrote:
>> Brief specifications:
>>   SATA interface
>>
>> Thoughts?
>
> SATA is so "yesterday". It represents "in the box" thinking. Sun
> engineering should still be capable of thinking "outside the box".
> Considerable optimizations/improvements are possible by eradicating
> legacy interfaces and employing the same sort of "outside the box"
> thinking that produced zfs. It is time to short-circuit that legacy
> storage stack.

SATA is so last century :-) But it is widely successful because of its
simplicity and inexpensive implementations. I'm not convinced you could
introduce a new block device transport today and get any traction.
[Richard waits for the IB guys to chime in...]

-- richard
comment below...

Ian Collins wrote:
> Richard Elling wrote:
>> Buried in the announcements last week from Sun is the Sun Flash Module.
>> http://www.sun.com/storage/flash/module.jsp
>>
>> I wanted to bring this up on this forum because it represents an
>> interesting way to add SSD technology to a system design. The new
>> Sun Blade X6275 has slots for these SSDs and I expect to see future
>> designs follow suit.
>>
>> Brief specifications:
>>   SO-DIMM form factor (very small)
>>   64 MByte DRAM buffers
>>   24 GBytes available space
>>   Enterprise class
>>   SATA interface
>>
>> I think this development is significant for several reasons.
>>
>> 1. It represents the beginning of the end for 5.25", 3.5", 2.5", and 1.8"
>> disk form factors.
>>
>> 2. You can now build an SSD disk array into a wide variety of form
>> factors which would be impossible to do with spinning media.
>>
>> 3. The (common) requirement for mirrored boot disks should prove
>> obsolete.
>>
>> 4. Will be a natural fit for ZFS, particularly for ZFS root systems.
>>
>> 5. Not likely to be hot-pluggable, but do we really care?
>>
>> Thoughts?
>
> It would be really interesting if there was an accompanying
> motherboard (or [23].5" case) with SATA power and data connectors.
> As it stands, there isn't a convenient way for Joe Public (i.e. me) to
> easily evaluate one.
>
> It does represent the next big thing in storage, but it risks
> languishing in a corner unless actively promoted in an easy-to-use
> form. Or until a company with more aggressive marketing picks up the
> idea and grabs the market.

The SSD is not a Sun design, so you don't have to worry about it being
a "Sun special." Several SSD vendors are already in this space and the
barrier to entry is low. What is new is that a major computer vendor is
adopting it.

A similar device is available from InnoDisk, which uses the standard
SATA/SAS connector, but you can see that it is not as mechanically
simple as an SO-DIMM.
http://www.innodisk.com/production.jsp?flashid=85

The first large market opportunity is high-density servers, which can
replace slow CF or big 2.5" disks very economically. I think the laptop
market is also a potential target, at least for a higher density (MLC)
design.

-- richard
On Sat, Apr 18, 2009 at 12:51 AM, Richard Elling
<richard.elling at gmail.com> wrote:
> Buried in the announcements last week from Sun is the Sun Flash Module.
> http://www.sun.com/storage/flash/module.jsp
>
> I wanted to bring this up on this forum because it represents an
> interesting way to add SSD technology to a system design. The new Sun
> Blade X6275 has slots for these SSDs and I expect to see future designs
> follow suit.
>
> Brief specifications:
>   SO-DIMM form factor (very small)
>   64 MByte DRAM buffers
>   24 GBytes available space
>   Enterprise class
>   SATA interface
>
> I think this development is significant for several reasons.
>
> 1. It represents the beginning of the end for 5.25", 3.5", 2.5", and 1.8"
> disk form factors.
>
> 2. You can now build an SSD disk array into a wide variety of form
> factors which would be impossible to do with spinning media.
>
> 3. The (common) requirement for mirrored boot disks should prove
> obsolete.

Why? Is the possibility of component or path failure and data corruption
so close to zero?

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
Peter Tribble wrote:
> Why? Is the possibility of component or path failure and data corruption
> so close to zero?

Yes, actually. To the first order, the failure rate of such devices is
based on the number of active devices. A modern disk may have 10+ active
devices, not counting the motor, read head, or media. A Flash Storage
Module may have 9 total devices, including the media: 8 flash chips + 1
controller. The flash chips can be internally correctable (e.g. ECC) and
include space for sparing, to accommodate the endurance requirements.
Net-net, the flash storage module will be 2x-4x more reliable than any
disk drive. Somewhere around here I have a slide which shows why disk
reliability is more important than RAID...

-- richard
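Richard's first-order argument (component failure rates of a series system add across active devices) can be sketched numerically. All the FIT numbers below are invented placeholders, not measured rates for any product; the point is only the shape of the comparison:

```python
# First-order series-system model: a device fails when any of its
# active components fails, so component failure rates simply add.
# All FIT values here are assumed for illustration (1 FIT = 1 failure
# per 10^9 device-hours), not measured numbers.

ELECTRONIC_FIT = 100    # assumed rate per active electronic device
MECHANICAL_FIT = 2500   # assumed combined rate for motor/head/media

def series_fit(n_electronic, mechanical_fit=0, per_device=ELECTRONIC_FIT):
    """Aggregate FIT rate for a series system of components."""
    return n_electronic * per_device + mechanical_fit

disk_fit = series_fit(10, MECHANICAL_FIT)  # 10+ active devices plus mechanics
ssd_fit = series_fit(9)                    # 8 flash chips + 1 controller

print(disk_fit / ssd_fit)  # ~3.9x under these assumptions, in Richard's 2x-4x range
```

With these (assumed) inputs the disk's rate is dominated by the mechanical contribution, which is why dropping the motor, head, and media buys most of the reliability gain.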
On Sat, 18 Apr 2009, Ian Collins wrote:
> Nice sentiment, but not a lot of use if they want people to use the
> module! ZFS, while definitely the result of "outside the box" thinking,
> can be used on all existing hardware. How far would it have progressed
> if it had required specialised hardware?

It seems wrong to use a SCSI or SATA stack to access a hardware device
like this. We could have used SCSI or SATA to access our video card, but
we chose not to because there are more efficient models. Likewise a new
type of "short stack" can be invented along with a new hardware
interface which is optimum for accessing memory-based low-latency
non-volatile devices. This "short stack" can fit under zfs like any
other block-oriented storage device.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Bob Friesenhahn wrote:
> On Sat, 18 Apr 2009, Ian Collins wrote:
>> Nice sentiment, but not a lot of use if they want people to use the
>> module! ZFS, while definitely the result of "outside the box"
>> thinking, can be used on all existing hardware. How far would it have
>> progressed if it had required specialised hardware?
>
> It seems wrong to use a SCSI or SATA stack to access a hardware device
> like this. We could have used SCSI or SATA to access our video card,
> but we chose not to because there are more efficient models. Likewise
> a new type of "short stack" can be invented along with a new hardware
> interface which is optimum for accessing memory-based low-latency
> non-volatile devices. This "short stack" can fit under zfs like any
> other block-oriented storage device.

The win is nonvolatile main memory. When we get this on a large, fast
scale (and it will happen in our lifetime :-) then we can begin to
forget about file systems, with an interim step through ramdisks. This
already exists for many household/personal devices today, so it is
really just a wait-until-Moore's-Law-catches-up game.

-- richard
On Sat, 18 Apr 2009, Richard Elling wrote:
> The win is nonvolatile main memory. When we get this on a large,
> fast scale (and it will happen in our lifetime :-) then we can begin
> to forget about file systems, with an interim step through ramdisks.
> This already exists for many household/personal devices today, so
> it is really just a wait-until-Moore's-Law-catches-up game.

Yes, I am eagerly waiting for the memory-mapped non-volatile mass
storage device with all the safeguards of zfs and performance gradually
approaching that of main memory. It is interesting that really primitive
storage devices found in low-power devices already have an edge in this
regard. Using SATA feels like we are just poking at it with a long
stick. It blocks progress.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
On Apr 18, 2009, at 11:27, Richard Elling wrote:
> The win is nonvolatile main memory. When we get this on a large,
> fast scale (and it will happen in our lifetime :-) then we can begin
> to forget about file systems, with an interim step through ramdisks.

"Tape is dead, Disk is tape, Flash is disk, RAM locality is king."
 -- Jim Gray

http://research.microsoft.com/en-us/um/people/gray/talks/Flash_is_Good.ppt
http://www.signallake.com/innovation/Flash_is_Good.pdf
http://queue.acm.org/detail.cfm?id=864078

"Disk Is The New Tape - Flash Is The New Disk"
 -- Dave Hitz

http://blogs.netapp.com/dave/2008/11/disk-is-the-new.html

This also kind of reflects the philosophy of the Varnish web proxy:

> Well, today computers really only have one kind of storage, and it
> is usually some sort of disk; the operating system and the virtual
> memory management hardware have converted the RAM to a cache for the
> disk storage.

http://varnish.projects.linpro.no/wiki/ArchitectNotes
On Fri, Apr 17 at 20:07, Bob Friesenhahn wrote:
> On Fri, 17 Apr 2009, Richard Elling wrote:
>> Brief specifications:
>>   SATA interface
>>
>> Thoughts?
>
> SATA is so "yesterday". It represents "in the box" thinking.

While SATA itself has been around for a while, and using it to plug in a
single 1.5TB or whatever rotating drive is standard in-the-box thinking,
using SATA in new ways (DIMM form factor or whatever) allows Sun (and
others) to leverage a lot of work by other companies into new and
interesting directions. The protocol is generally sound and the
interface is reasonably easy to build, and perhaps more importantly, an
IT manager can comprehend the merits of the design and say "yes, I'll
pay money for this."

If you think *too* outside the box all the time, you'll need to stretch
your one sale pretty far to keep food on the table. Seems like Sun is
striking a decent balance to me.

--eric

-- 
Eric D. Mudama
edmudama at mail.bounceswoosh.org
On Sat, Apr 18 at 9:56, Bob Friesenhahn wrote:
> On Sat, 18 Apr 2009, Ian Collins wrote:
>> Nice sentiment, but not a lot of use if they want people to use the
>> module! ZFS, while definitely the result of "outside the box"
>> thinking, can be used on all existing hardware. How far would it have
>> progressed if it had required specialised hardware?
>
> It seems wrong to use a SCSI or SATA stack to access a hardware device
> like this. We could have used SCSI or SATA to access our video card,
> but we chose not to because there are more efficient models. Likewise
> a new type of "short stack" can be invented along with a new hardware
> interface which is optimum for accessing memory-based low-latency
> non-volatile devices. This "short stack" can fit under zfs like any
> other block-oriented storage device.

What is tall about the SATA stack? There's not THAT much overhead in
SATA, and there's no reason you would need to support any legacy
transfer modes or commands you weren't interested in.

-- 
Eric D. Mudama
edmudama at mail.bounceswoosh.org
On Sat, 18 Apr 2009, Eric D. Mudama wrote:
>> SATA is so "yesterday". It represents "in the box" thinking.
>
> While SATA itself has been around for a while, and using it to plug in
> a single 1.5TB or whatever rotating drive is standard in-the-box
> thinking, using SATA in new ways (DIMM form factor or whatever) allows
> Sun (and others) to leverage a lot of work by other companies into new
> and interesting directions.

Sun did the DIMM thing back in 1993/1994 so this is not exactly new.
Probably the SATA bit is new.

> If you think *too* outside the box all the time, you'll need to
> stretch your one sale pretty far to keep food on the table. Seems
> like Sun is striking a decent balance to me.

The economic downturn may require more innovative thinking for survival.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
On Sat, 18 Apr 2009, Eric D. Mudama wrote:
> What is tall about the SATA stack? There's not THAT much overhead in
> SATA, and there's no reason you would need to support any legacy
> transfer modes or commands you weren't interested in.

If SATA is much more than a memcpy() then it is excessive overhead for a
memory-oriented device. In fact, since the "device" is actually
comprised of quite a few independent memory modules, it should be
possible to schedule I/O for each independent memory module in parallel.
A large storage system will be comprised of tens, hundreds or even
thousands of independent memory modules, so it does not make sense to
serialize access via legacy protocols. The larger the storage device,
the more it suffers from a serial protocol.

Many years ago Sun implemented memory mapped files with copy-on-write
semantics. Unfortunately, this was still tied to a legacy block device.
It is time to think more in parallel and prepare for the future.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Bob Friesenhahn wrote:
> On Sat, 18 Apr 2009, Eric D. Mudama wrote:
>> What is tall about the SATA stack? There's not THAT much overhead in
>> SATA, and there's no reason you would need to support any legacy
>> transfer modes or commands you weren't interested in.
>
> If SATA is much more than a memcpy() then it is excessive overhead for
> a memory-oriented device. In fact, since the "device" is actually
> comprised of quite a few independent memory modules, it should be
> possible to schedule I/O for each independent memory module in
> parallel. A large storage system will be comprised of tens, hundreds
> or even thousands of independent memory modules so it does not make
> sense to serialize access via legacy protocols. The larger the
> storage device, the more it suffers from a serial protocol.

It's a mistake to think that flash looks similar to RAM. It doesn't in
lots of ways -- actually it looks more similar to a hard disk in many
respects ;-)

It's true that you will find lots of flash memory modules on an SSD.
This is because they are slow, and in order to make good use of the
available SATA bandwidth, many are paralleled up so the data can be
transferred to lots of them in parallel (think of it like a mini RAID0
array). In the case of the SATA disks we sell for X and T series
systems, there are 10 parallel flash channels in each one, which enables
the device to achieve about 85% of the theoretical SATA bandwidth (which
is way higher than any single hard drive can do, except to its cache).

Also, like a hard disk, flash blocks go bad, and again like a disk, the
SSD has spare blocks to use as replacements, and includes bad block
handling logic in its controller to map these in when required.
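As a rough sanity check on that 85% figure, here is the back-of-envelope arithmetic. The per-channel rate below is an assumption invented to make the numbers line up, not a real NAND datasheet value:

```python
# Back-of-envelope: N slow flash channels striped in parallel can fill
# most of a SATA link. The per-channel rate is assumed for illustration.

SATA_II_MB_S = 300.0   # 3 Gb/s link, 8b/10b encoded -> ~300 MB/s payload
CHANNELS = 10          # parallel flash channels, per the post
CHANNEL_MB_S = 25.5    # assumed sustained rate of one flash channel

aggregate = min(CHANNELS * CHANNEL_MB_S, SATA_II_MB_S)
utilization = aggregate / SATA_II_MB_S

print(f"{aggregate:.0f} MB/s, {utilization:.0%} of the SATA link")
# -> 255 MB/s, 85% of the SATA link
```

The same arithmetic explains why a single flash chip behind SATA would be a poor product: one channel at these assumed speeds would use less than a tenth of the link.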
Over the life of an Enterprise class SSD, the controller actually
expects many more flash block failures than you would ever see on a
working hard disk, and there is consequently a much larger proportion of
spare flash memory included than a hard drive will normally have, in
order to achieve the same life. (Unlike a hard disk, blocks tend to die
gradually, so the flash controller can normally detect them getting weak
and map in replacement blocks long before any user data is lost.)

One departure from a hard disk is that flash blocks wear out according
to how much they're used. Most filesystems have blocks in some positions
which are used much more than others (e.g. superblocks, uberblocks,
etc), and these are normally really critical to the filesystem.
Designers of SSDs know that it would be completely unacceptable for such
critical blocks to fail quickly -- that would in effect mean the SSD had
a very short life, although most of it would still be fine when it
became useless. To counteract this, the on-board SSD controller
implements a feature called wear leveling. What this does is to move the
logical block numbers around on the physical flash blocks, so that all
the blocks wear at the same rate. So you can sit there continually
rewriting block 0, and you won't wear out the first flash block, as the
controller will move around where it stores block 0 in flash, so all the
flash memory wears at the same rate and you get the longest possible
life from the SSD.

When you've considered these (and doubtless other) issues, it should
become clear why it makes good sense to build flash memory of the type
we currently have available into something resembling a disk. It really
looks nothing like DRAM memory. I'm sure that in time new flash
technologies will appear, and it may make sense to build them presenting
different interfaces.

-- Andrew
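Andrew's wear-leveling description can be illustrated with a toy model — a deliberately simplified sketch, not how any real SSD controller works: logical block numbers float over physical blocks, and each rewrite lands on the least-worn free block.

```python
class WearLeveler:
    """Toy dynamic wear-leveler: remaps a logical block to the
    least-worn free physical block on every rewrite, so erase wear
    spreads across all of flash even under a pathological workload."""

    def __init__(self, nblocks):
        self.erase_count = [0] * nblocks
        self.l2p = {}                     # logical -> physical block map
        self.free = set(range(nblocks))   # physical blocks holding no data

    def write(self, lba):
        # Pick the least-worn free block, "erase" it, and remap the LBA.
        target = min(self.free, key=lambda b: self.erase_count[b])
        old = self.l2p.get(lba)
        if old is not None:
            self.free.add(old)            # the stale copy becomes free space
        self.free.remove(target)
        self.erase_count[target] += 1
        self.l2p[lba] = target

# Continually rewriting logical block 0 -- the pathological case Andrew
# describes -- wears all 8 physical blocks evenly instead of burning one out:
w = WearLeveler(8)
for _ in range(80):
    w.write(0)
print(w.erase_count)   # 80 erases spread evenly, ~10 per block
```

Real controllers also do static wear leveling (relocating cold data off lightly-worn blocks), which this sketch omits.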
On Sat, Apr 18, 2009 at 10:38 PM, Andrew Gabriel
<agabriel at opensolaris.org> wrote:
> Bob Friesenhahn wrote:
>> On Sat, 18 Apr 2009, Eric D. Mudama wrote:
>>> What is tall about the SATA stack? There's not THAT much overhead in
>>> SATA, and there's no reason you would need to support any legacy
>>> transfer modes or commands you weren't interested in.
>>
>> If SATA is much more than a memcpy() then it is excessive overhead for
>> a memory-oriented device. In fact, since the "device" is actually
>> comprised of quite a few independent memory modules, it should be
>> possible to schedule I/O for each independent memory module in
>> parallel. A large storage system will be comprised of tens, hundreds
>> or even thousands of independent memory modules so it does not make
>> sense to serialize access via legacy protocols. The larger the storage
>> device, the more it suffers from a serial protocol.
>
> It's a mistake to think that flash looks similar to RAM. It doesn't in
> lots of ways -- actually it looks more similar to a hard disk in many
> respects ;-)

That's true, but flash isn't a hard disk either. Flash is flash, and I
believe the poster meant exposing it for the OS to consume. This way the
OS can grow and use a generic Flash Translation Layer for wear leveling
and block remapping, and the filesystem could use flash features
directly. This way, for example, TRIM commands could be implemented in
this FTL layer and won't be hidden in proprietary firmware. The less
magic and blackbox firmware, and the more open source code, the better.

If I am not clear, here is a longer article on this topic:
http://lwn.net/Articles/276025/

-- 
Tomasz Torcz
xmpp: zdzichubg at chrome.pl
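Tomasz's point about an OS-level FTL can be sketched in a few lines. This is a hypothetical toy, not any real kernel's interface: the win is that the filesystem can tell the translation layer which logical blocks are dead, so trimmed blocks rejoin the free pool instead of being treated as live data forever.

```python
class ToyFTL:
    """Toy OS-level flash translation layer: the filesystem above it
    can call trim() to declare a logical block dead, returning its
    physical block to the free pool immediately."""

    def __init__(self, nblocks):
        self.free = set(range(nblocks))   # unwritten physical blocks
        self.l2p = {}                     # logical -> physical block map

    def write(self, lba):
        phys = self.free.pop()            # out-of-place write to a free block
        if lba in self.l2p:
            self.free.add(self.l2p[lba])  # the stale copy is reclaimable
        self.l2p[lba] = phys

    def trim(self, lba):
        # Filesystem hint: this logical block no longer holds live data.
        phys = self.l2p.pop(lba, None)
        if phys is not None:
            self.free.add(phys)

ftl = ToyFTL(4)
ftl.write(1)
ftl.write(2)
ftl.trim(1)            # without TRIM, block 1's data would look live forever
print(len(ftl.free))   # -> 3: only one physical block still holds live data
```

A firmware FTL behind SATA has no such hint (in 2009, at least on most shipping drives), which is exactly the layering complaint in the LWN article Tomasz links.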
Tomasz Torcz wrote:
> On Sat, Apr 18, 2009 at 10:38 PM, Andrew Gabriel wrote:
>> Bob Friesenhahn wrote:
>>> On Sat, 18 Apr 2009, Eric D. Mudama wrote:
>>>> What is tall about the SATA stack? There's not THAT much overhead
>>>> in SATA, and there's no reason you would need to support any legacy
>>>> transfer modes or commands you weren't interested in.
>>>
>>> If SATA is much more than a memcpy() then it is excessive overhead
>>> for a memory-oriented device. In fact, since the "device" is
>>> actually comprised of quite a few independent memory modules, it
>>> should be possible to schedule I/O for each independent memory
>>> module in parallel. A large storage system will be comprised of
>>> tens, hundreds or even thousands of independent memory modules so
>>> it does not make sense to serialize access via legacy protocols.
>>> The larger the storage device, the more it suffers from a serial
>>> protocol.
>>
>> It's a mistake to think that flash looks similar to RAM. It doesn't
>> in lots of ways -- actually it looks more similar to a hard disk in
>> many respects ;-)
>
> That's true, but flash isn't a hard disk either. Flash is flash, and I
> believe the poster meant exposing it for the OS to consume. This way
> the OS can grow and use a generic Flash Translation Layer for wear
> leveling and block remapping, and the filesystem could use flash
> features directly. This way, for example, TRIM commands could be
> implemented in this FTL layer and won't be hidden in proprietary
> firmware. The less magic and blackbox firmware, and the more open
> source code, the better.
>
> If I am not clear, here is a longer article on this topic:
> http://lwn.net/Articles/276025/

I'm somewhat skeptical about designing for the exact nature of today's
NAND flash technology too far up the stack. You'll probably find your
efforts have been obsoleted by changes in the underlying flash
technology before you even get as far as debugging your efforts.
There's a big effort across the industry to advance flash technology in
many directions, and the limitations of today's flash technology are
probably just transitory en route, which I expect will move on quickly.

-- Andrew
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Andrew Gabriel wrote:
> Bob Friesenhahn wrote:
>> On Sat, 18 Apr 2009, Eric D. Mudama wrote:
>>> What is tall about the SATA stack? There's not THAT much overhead in
>>> SATA, and there's no reason you would need to support any legacy
>>> transfer modes or commands you weren't interested in.
>>
>> If SATA is much more than a memcpy() then it is excessive overhead
>> for a memory-oriented device. In fact, since the "device" is
>> actually comprised of quite a few independent memory modules, it
>> should be possible to schedule I/O for each independent memory module
>> in parallel. A large storage system will be comprised of tens,
>> hundreds or even thousands of independent memory modules so it does
>> not make sense to serialize access via legacy protocols. The larger
>> the storage device, the more it suffers from a serial protocol.
>
> It's a mistake to think that flash looks similar to RAM. It doesn't in
> lots of ways -- actually it looks more similar to a hard disk in many
> respects ;-)

I was being careful when I said "nonvolatile memory." There are more
choices than flash. Or, if you want to become a zillionaire, invent the
perfect non-volatile memory :-)

-- richard
On Sat, 18 Apr 2009, Richard Elling wrote:
>> It's a mistake to think that flash looks similar to RAM. It doesn't
>> in lots of ways -- actually it looks more similar to a hard disk in
>> many respects ;-)

I already knew (and agree with) everything that Andrew Gabriel said,
except for the implication that tomorrow's software cannot accomplish
much of what firmware does today. Using an existing disk drive interface
is expedient and gets products to market quickly, but it is time to
start thinking of a new interface which is better for solid-state
storage. Solid-state storage typically has dramatically lower latencies,
so stacks optimized for pokey disk drives are not a good match.

> I was being careful when I said "nonvolatile memory." There are more
> choices than flash. Or, if you want to become a zillionaire, invent
> the perfect non-volatile memory :-)

Perhaps based on the 4th semiconductor type, the memristor. The
semiconductor they forgot to tell you about in school. See
http://en.wikipedia.org/wiki/Memristor. According to the article at
http://www.spectrum.ieee.org/may08/6207, "A memory based on memristors
could be 1000 times faster than magnetic disks and use much less power".
True out-of-the-box thinking.

Flash devices have been in use since the '80s, yet only now are we
starting to use them for mass storage in general purpose computers.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>>>>> "re" == Richard Elling <richard.elling at gmail.com> writes:

    re> The win is nonvolatile main memory. When we get this on a
    re> large, fast scale (and it will happen in our lifetime :-) then
    re> we can begin to forget about file systems, with an interim
    re> step through ramdisks.

yeah, I still think an unrealized and very cheap winning design would be
to transform part of the main SDRAM into nonvolatile memory by adding a
hardware watchdog device that backs it up to FLASH (and writing a
software driver to handle the restore on boot). It's not hot-pluggable,
but at least it's pluggable: with the power off you could move the flash
module from one machine to another. The companies that write motherboard
BIOS seem way too incompetent to manage this reliably enough to be
useful, but for some high-end boutique server maybe one could pull it
off.

but as for not needing filesystems, I can't imagine it. The smalltalk
zealots like to talk about their persistence layer, but when you ask
them, ``how do you upgrade the software while keeping the old data?''
they hem and haw and say ``it's not really *THAT* bad,'' but I suspect
the reason DabbleDB is only available hosted isn't just revenue model --
I bet they couldn't safely have customers doing their own software
upgrades. I bet those guys do all upgrades with the Senior Wizard
present and the debugger attached, and use convoluted schemes of
snapshots and parallel development environments to supervise the whole
delicate cutover.

We'll still need snapshots, clones, backup/replication tools, ACLs,
MAC labels, zones, byte-range locking, recovery/verification tools for
spotting bugs in the NeoFilesystem code itself, usw.
I think the tree-of-bytestreams metaphor might end up enduring, but
squeezing maximal performance out of a nonvolatile device that one can
access without a disk driver, sometimes without even a syscall, will
need new userland APIs, a thick subtle library that can cooperate with
other untrusted copies of itself, and a strong focus on ``zero copy''.