Folks,

As you may have heard, NetApp filed a lawsuit against Sun in 2007 (now carried over to Oracle) for patent infringement by the ZFS file system. Now, NetApp is taking a stronger stance and threatening ZFS storage suppliers to make them stop selling ZFS-based storage.

http://www.theregister.co.uk/2010/07/06/netapp_coraid/?utm_source=feedburner&utm_medium=twitter&utm_campaign=Feed%3A+shovelarts+%28Shovel+Arts%29

Given this, I am wondering what you think the future of ZFS is as an open source project.

Regards,
Peter
On 7/7/2010 6:33 PM, Peter Taps wrote:
> Given this, I am wondering what you think the future of ZFS is as an open source project.

Go take a look at the archives for this list. It's been discussed before.

NetApp's relevant patents were recently declared void (Joerg Schilling's work, amongst others, predates the patents by almost a decade). NetApp is appealing the decision, but I can't see how they'll win.

Fundamentally, NetApp's desperate. The squeeze on folks like CoRaid and Nexenta is a shakedown, pure and simple. They're hoping to get cash from others before their suit is thrown out completely.

Oracle certainly isn't going to stop development of one of its prize technologies on the remote possibility that NetApp prevails, a possibility which, as time goes on, gets smaller and smaller. So long as Oracle continues to do development, I see no reason for a change in the open source nature of ZFS (i.e. it matters not to the patent suit whether ZFS is open or closed).

Note: I do not speak for Oracle here in any way, nor do I have any privileged knowledge of the suit.

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
On Wed, 2010-07-07 at 18:52 -0700, Erik Trimble wrote:
> Fundamentally, NetApp's desperate. The squeeze on folks like CoRaid and
> Nexenta is a shakedown, pure and simple. They're hoping to get cash from
> others before their suit is thrown out completely.

Nexenta has not been hit as far as I know. Some companies shipping products based on NexentaStor have been threatened, but I'm not sure it's gone anywhere yet.

This situation is why I'm coming to believe that there is almost no case for software patents. (I still think there may be a few exceptions -- the RSA patent being a good example where there was significant enough innovation to possibly justify a patent.) The sad fact is that when a company feels it can't compete on the merits of innovation or cost, it seeks to litigate the competition. What NetApp *should* be doing is figuring out how to out-innovate us, undercut us ("us" collectively meaning Oracle and all other ZFS users), or find other ways to compete effectively. They can't, so they resort to litigation. Sounds like a certain operation from Santa Cruz, doesn't it?

> Oracle certainly isn't going to stop development of one of its prize
> technologies on the remote possibility that NetApp prevails. [...]
> Note: I do not speak for Oracle here in any way, nor do I have any
> privileged knowledge of the suit.

Fundamentally, I agree. I don't think it's too likely that Oracle will settle for any situation which gives NetApp any leverage, particularly given the weak position NetApp is in. There could always be a surprise, but right now my money is on NetApp's suit failing. (And I've put my money where my mouth is on this issue -- as a recent Nexenta hire leaving behind a stable position at Oracle, I'm confident that we're in good shape.)

	-- Garrett
"Garrett D''Amore" <garrett at nexenta.com> wrote:> This situation is why I''m coming to believe that there is almost no case > for software patents. (I still think there may be a few exceptions -- > the RSA patent being a good example where there was significant enough > innovation to possibly justify a patent). The sad fact is that when aRSA never has been a patent in Europe as it was files after the decription was published ;-)> company feels it can''t compete on the merits of innovation or cost, it > seeks to litigate the competition. What NetApp *should* be doing is > figuring out how to out-innovate us, undercut us ("us" collectively > meaning Oracle and all other ZFS users), or find other ways to compete > effectively. They can''t, so they resort to litigation. Sounds like aPatent claims in this area are usually a result of missing competitive products at the side of the plaintiff. J?rg -- EMail:joerg at schily.isdn.cs.tu-berlin.de (home) J?rg Schilling D-13353 Berlin js at cs.tu-berlin.de (uni) joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Peter Taps
>
> As you may have heard, NetApp filed a lawsuit against Sun in 2007 (now
> carried over to Oracle) for patent infringement by the ZFS file system.
>
> Given this, I am wondering what you think the future of ZFS is as an
> open source project.

Others have already stated "Oracle won the case" in better detail than I could. So ZFS is safe in Solaris/OpenSolaris. But some other big names (Apple) have backed down from deploying ZFS, presumably due to threats, and some others (Coraid) are being sued anyway.

This does reduce the number of ZFS deployments in the world, so it's probably benefiting NetApp to keep the suit alive, even if they never collect a dollar. But surprisingly, I think it's also benefiting Oracle. The lack of ZFS competition certainly helps Oracle sell Sun hardware and Solaris support contracts. As strongly as I feel "Apple enterprise" is an oxymoron, and Apple servers contribute negative value to an infrastructure, I do know a lot of people who buy / have bought them. And I think that number would be higher if Apple were shipping ZFS.

So, IMHO: COW lawsuit: Good for NetApp. Good for Oracle and Solaris. Bad for ZFS like a rainy day is bad for baseball. And bad for everybody else.
On Thu, 8 Jul 2010, Edward Ned Harvey wrote:

> Apple servers contribute negative value to an infrastructure, I do know a
> lot of people who buy / have bought them. And I think that number would be
> higher if Apple were shipping ZFS.

Yep. Provided it supported ZFS, a Mac Mini makes for a compelling SOHO server. The lack of ZFS is the main thing holding me back here...

-- 
Rich Teer, Publisher
Vinylphile Magazine
www.vinylphilemag.com
> On Thu, 8 Jul 2010, Edward Ned Harvey wrote:
> Yep. Provided it supported ZFS, a Mac Mini makes for
> a compelling SOHO server.

Warning: a Mac Mini does not have eSATA ports for external storage. It's dangerous to use USB for external storage since many (most? all?) USB-to-SATA chips discard SYNC instead of passing FLUSH to the drive - very bad for ZFS.

Dell's Zino HD is a better choice - it has two eSATA ports (port multiplier capable).
On 9 Jul 2010, at 08:55, James Van Artsdalen <james-opensolaris at jrv.org> wrote:

> Warning: a Mac Mini does not have eSATA ports for external storage. It's
> dangerous to use USB for external storage since many (most? all?) USB-to-SATA
> chips discard SYNC instead of passing FLUSH to the drive - very bad for ZFS.

All Mac Minis have FireWire - the new ones have FW800. In any case, the server-class Mini has two internal hard drives, which makes it amenable to mirroring.

The Mac ZFS port limps on in any case - though I've not managed to spend much time on it recently, I have been making progress this week. The Google Code project is at http://code.google.com/p/maczfs/ and my GitHub is at http://github.com/alblue/ for those that are interested.

Alex
> From: Rich Teer [mailto:rich.teer at rite-group.com]
> Sent: Thursday, July 08, 2010 7:43 PM
>
> Yep. Provided it supported ZFS, a Mac Mini makes for a compelling SOHO
> server. The lack of ZFS is the main thing holding me back here...

I don't really want to go into much detail here (it's a zfs list, not an anti-apple list), but in my personal experience, OSX Server is simply not a stable OS. Even with all the patches installed and a light workload, my dumb Leopard server keeps doing really dumb things like failing to start the dhcp service, or spontaneously losing its password database, or failing to release a Time Machine image when a client goes offline (thus necessitating a server reboot before the client can use Time Machine again). I generally reboot my Xserve once per week, whereas my Linux, Windows, and Solaris servers only need reboots for hardware issues or certain system updates. Quarterly. After several iterations, we finally disabled all OSX services except Time Machine.

If you happen to like Mac Minis for their *hardware* >cough< instead of their software (OSX Server), you could always install osol, or FreeBSD, or Linux or something on that machine instead. I like Mac laptops, but their "server" and enterprise offerings are beyond pathetic.
>>>>> "ab" == Alex Blewitt <alex.blewitt at gmail.com> writes:ab> All Mac Minis have FireWire - the new ones have FW800. I tried attaching just two disks to a ZFS host using firewire, and it worked very badly for me. I found: 1. The solaris firewire stack isn''t as good as the Mac OS one. 2. Solaris is very obnoxious about drives it regards as ``removeable''''. There are ``hot-swappable'''' drives that are not considered removeable but can be removed about as easily, that are maybe handled less obnoxiously. Firewire''s removeable while SAS/SATA are hot-swappable. 3. The quality of software inside the firewire cases varies wildly and is a big source of stability problems. (even on mac) The companies behind the software are sketchy and weak, while only a few large cartels make SAS expanders for example. Also, the price of these cases is ridiculously high compared to SATA world. If you go there you may as well take your wad next door and get SAS. 4. The translation between firewire and SATA is not a simple one, and is not transparent to ''smartctl'' commands, or other werid things like hard disk firmware upgraders. though I guess the same is true of the lsi controllers under solaris. This problem''s rampant unfortunately. 5. Firewire is slow. too slow to make 2x speed interesting. and the host chips are not that advanced so they use a lot of CPU. 6. The DTL partial-mirror-resilver doesn''t work. With b130 it still doesn''t work. After half a mirror goes away and comes back, scrubs always reveal CKSUM errors on the half that went away. With b71 I foudn if I meticulously ''zpool offline''d the disks before taking them away, the CKSUM errors didn''t happen. With b130 that no longer helps. so, scratchy unreliable connections are just unworkable. Even iSCSI is not great, but firewire cases sprawled all over a desk with trippable scratchy cables is just not on. It''s better to have larger cases that can be mounted in a rack, or if not that, at least cases that are heavier and fewer in number and fewer in cordage. suggest that you do not waste time with firewire. SATA, SAS, or fuckoff. None of this is an insult to your blingy designer apple iShit. It applies equally well to any hardware involving lots of tiny firewire cases. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20100709/10aac93e/attachment.bin>
On Fri, 2010-07-09 at 15:02 -0400, Miles Nordin wrote:
> 1. The Solaris FireWire stack isn't as good as the Mac OS one.

Indeed. There has been some improvement here in the past year or two, but I still wouldn't deem it ready for serious production work.

> 2. Solaris is very obnoxious about drives it regards as "removable". [...]

Actually, most of the "removable" and "hotpluggable" devices have the same handling. But SAS/SATA HBAs rarely identify their devices as hotpluggable, even though they are. There are other issues you hit as a result here. We're approaching the state where all media are hotpluggable, with the exception of legacy PATA and parallel SCSI, and those are becoming rarer and rarer. (Granted, many hardware chassis don't support hotplug of internal SATA drives, but that's an attribute of the chassis.)

> 3. The quality of software inside the FireWire cases varies wildly
> and is a big source of stability problems (even on Mac).

I'd be highly concerned about whether 1394 adapters did cache flush correctly.

	-- Garrett
On 9 Jul 2010, at 20:38, Garrett D'Amore wrote:
>> 1. The Solaris FireWire stack isn't as good as the Mac OS one.
>
> Indeed. There has been some improvement here in the past year or two,
> but I still wouldn't deem it ready for serious production work.

That may be true for Solaris, but not so for Mac OS X. And after all, that's what I'm working to get ZFS on.

>> 3. The quality of software inside the FireWire cases varies wildly
>> and is a big source of stability problems (even on Mac).

It would be good if you could refrain from spreading FUD if you don't have experience with it. I have used FW400 and FW800 on Mac systems for the last 8 years; the only problem was with the Oxford 911 chipset in OSX 10.1 days. Since then, I've not experienced any issues to do with the bus itself. It may not suit everyone's needs, and it may not be supported well on OpenSolaris, but it works fine on a Mac.

Alex
Folks,

I would appreciate it if you could create a separate thread for the Mac Mini. Back to the original subject.

NetApp has deep pockets. A few companies have already backed out of ZFS as they cannot afford to go through a lawsuit. I am in a stealth startup company and we rely on ZFS for our application. The future of our company, and of many other businesses, depends on what happens to ZFS. If you are in a similar boat, what actions are you planning?

Regards,
Peter
On 7/9/2010 2:55 PM, Peter Taps wrote:
> NetApp has deep pockets. A few companies have already backed out of ZFS as
> they cannot afford to go through a lawsuit. I am in a stealth startup company
> and we rely on ZFS for our application. The future of our company, and of many
> other businesses, depends on what happens to ZFS. If you are in a similar boat,
> what actions are you planning?

Congratulations. You've tied your boat to a system which has legal issues. Welcome to the Valley.

Part of being a successful startup is having a flexible business plan, which includes a hard look at the possibility that core technologies you depend on may no longer be available to you, for whatever reason. Risk analysis is something that any good business *should* include as part of its strategic view (you do have periodic strategic reviews, right?).

If you're planning on developing some sort of storage appliance, and depend on OpenSolaris or FreeBSD w/ ZFS, well, pick another filesystem. It's pretty much that simple. Painful, but simple - each of the other filesystems has well-known weaknesses and strengths, so it shouldn't be a big issue to pick the right one for you (even if it's not just like ZFS). Of course, the smart thing to do is get this strategy in place now, but wait to execute it until it becomes necessary (i.e. ZFS can't be used anymore).

If you're writing a ZFS-dependent application (backup?), well, then, you're up the creek. You have no alternative, since you've bet the farm on ZFS. The good news is that it's unlikely that NetApp will win, and if it does look like they'll win, I would bet huge chunks of money that Oracle cross-licenses the patents or pays for a license, rather than kill ZFS (it simply makes too much money for Oracle to abandon).

IANAL, but I'd strongly advise against trying to get a license from NetApp, should they come calling for blood money. My personal feeling is that it's better to bet the startup's future on not needing the license than on forking over a substantial portion of your revenue for what most likely will be unnecessary. But it's up to your financial backers - in the end, it's a gamble. But so are all startups, and trading away significant revenue for dubious "safety" isn't a good sign that your startup will succeed in the long haul.

I'd strongly suggest trying to stay off of NetApp's radar for now, as they're in the mode of a shakedown bully while they still have leverage. If you do get a call from NetApp, go see an IP lawyer right away. They should give you strategies which you can use to stall the progress of any actual lawsuit until the NetApp/Oracle one is finished. And even now, that strategy is likely less costly than one involving forking over a portion of your revenue to NetApp for a considerable time. Do remember: Oracle has much deeper pockets than NetApp, and much less incentive to settle.

None of the preceding should imply that I speak for Oracle, Inc., nor do I have any special knowledge of the progress of the NetApp v. Oracle lawsuit.

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
>>>>> "ab" == Alex Blewitt <alex.blewitt at gmail.com> writes:>>> 3. The quality of software inside the firewire cases varies >>> wildly and is a big source of stability problems. (even on >>> mac) ab> It would be good if you could refrain from spreading FUD if ab> you don''t have experience with it. yup, my experience was with the Prolific PL-3705 chip, which was very popular for a while. it has two problems: * it doesn''t auto-pick its ``ID number'''' or ``address'''' or something, so if you have two cases with this chip on the same bus, they won''t work. go google it! * it crashes. as in, I reboot the computer but not the case, and the drive won''t mount. I reboot the case but not the computer, and the drive starts working again. http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html I even upgraded the firmware to give the chinese another shot. still broken. You can easily google for other problems with firewire cases in general. The performance of the overall system is all over the place depending on the bridge chip you use. Some of them have problems with ``large'''' transactions as well. Some of them lose their shit when the drive reports bad sectors, instead of passing the error along so you can usefully diagnose it---not that they''re the only devices with awful exception handling in this area, but why add one more mystery? I think it was already clear I had experience from the level of detail in the other items I mentioned, though, wasn''t it? Add also to all of it the cache flush suspicions from Garrett: these bridge chips have full-on ARM cores inside them and lots of buffers, which is something SAS multipliers don''t have AIUI. Yeah, in a way that''s slightly FUDdy but not really since IIRC the write cache problem has been verified at least on some USB cases, hasn''t it? Also since the testing procedure for cache flush problems is a little....ad-hoc, and a lot of people are therefore putting hardware to work without testing cache flush at all, I think it makes perfect sense to replace suspicious components with lengths of dumb wire where possible even if the suspicions aren''t proved. ab> I have used FW400 and FW800 on Mac systems for the last 8 ab> years; the only problem was with the Oxford 911 chipset in OSX ab> 10.1 days. yeah, well, if you don''t want to listen, then fine, don''t listen. ab> It may not suit everyone''s needs, and it may not be supported ab> well on OpenSolaris, but it works fine on a Mac. aside from being slow unstable and expensive, yeah it works fine on Mac. But you don''t really have the eSATA option on the mac unless you pay double for the ``pro'''' desktop, so i can see why you''d defend your only choice of disk if you''ve already committed to apple. Does the Mac OS even have an interesting zfs port? Remind me why we are discussing this, again? -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20100710/624fd19e/attachment.bin>
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Peter Taps
>
> A few companies have already backed out of ZFS
> as they cannot afford to go through a lawsuit.

Or, in the case of Apple, who could definitely afford a lawsuit, but chose to avoid it anyway.

> I am in a stealth
> startup company and we rely on ZFS for our application. The future of
> our company, and of many other businesses, depends on what happens to ZFS.

For a lot of purposes, ZFS is the clear best solution. But maybe you're not necessarily in one of those situations? Perhaps you could use Microsoft VSS, or Linux BTRFS?

'Course, by all rights, those are copy-on-write too. So why doesn't NetApp have a lawsuit against kernel.org, or Microsoft? Maybe because they just know they'll damage their own business too much by suing Linus, and they can't afford to go up against MS. I guess.
On Jul 10, 2010, at 14:20, Edward Ned Harvey wrote:

>> A few companies have already backed out of ZFS
>> as they cannot afford to go through a lawsuit.
>
> Or, in the case of Apple, who could definitely afford a lawsuit, but chose
> to avoid it anyway.

This was covered already:

http://mail.opensolaris.org/pipermail/zfs-discuss/2009-October/033125.html
On Sat, Jul 10, 2010 at 1:20 PM, Edward Ned Harvey <solaris2 at nedharvey.com> wrote:
> 'Course, by all rights, those are copy-on-write too. So why doesn't NetApp
> have a lawsuit against kernel.org, or Microsoft? Maybe because they just know
> they'll damage their own business too much by suing Linus, and they can't
> afford to go up against MS. I guess.

Because VSS isn't doing anything remotely close to what WAFL is doing when it takes snapshots. I haven't spent much time looking at the exact BTRFS implementation, but I'd imagine the fact that its on-disk format isn't "finalized" (last I heard) would make it a bit premature to file a lawsuit. I'm sure they're actively watching it as well.

Furthermore, I'm sure the fact that one of the core ZFS developers, Matt Ahrens, previously interned with the filesystem group at NetApp had just a *BIT* to do with the lawsuit. From their perspective, it's just a bit too convenient that someone gets access to the crown jewels, then runs off to a new company and creates a filesystem that looks and feels so similar.

Of course, taking stabs in the dark on this mailing list without having access to all of the court documents isn't really constructive in the first place. Then again, neither are people trying to claim they have a solid understanding of the validity of the lawsuit(s), on this mailing list, who aren't IP lawyers.

--Tim
> From: David Magda [mailto:dmagda at ee.ryerson.ca]
>
> On Jul 10, 2010, at 14:20, Edward Ned Harvey wrote:
>
> > Or, in the case of Apple, who could definitely afford a lawsuit, but chose
> > to avoid it anyway.
>
> This was covered already:
>
> http://mail.opensolaris.org/pipermail/zfs-discuss/2009-October/033125.html

Precisely.

A private license, with support and indemnification from Sun, would shield Apple from any lawsuit from NetApp.
> From: Tim Cook [mailto:tim at cook.ms]
>
> Because VSS isn't doing anything remotely close to what WAFL is doing
> when it takes snapshots.

It may not do what you want it to do, but it's still copy on write, as evidenced by the fact that it takes instantaneous snapshots, and snapshots don't get overwritten when new data is written.

I wouldn't call that "not even remotely close." It's different, but definitely in the same ballpark.
On Mon, Jul 12, 2010 at 8:32 AM, Edward Ned Harvey <solaris2 at nedharvey.com> wrote:
> It may not do what you want it to do, but it's still copy on write, as
> evidenced by the fact that it takes instantaneous snapshots, and snapshots
> don't get overwritten when new data is written.
>
> I wouldn't call that "not even remotely close." It's different, but
> definitely in the same ballpark.

Everyone's SNAPSHOTS are copy on write BESIDES ZFS's and WAFL's. The filesystem itself is copy-on-write for NetApp/Oracle, which is why there is no performance degradation when you take them.

Per Microsoft: "When a change to the original volume occurs, but before it is written to disk, the block about to be modified is read and then written to a 'differences area', which preserves a copy of the data block before it is overwritten with the change."

That is exactly how pretty much everyone else takes snapshots in the industry, and exactly why nobody can keep more than a handful on disk at any one time, and sometimes not even that for data that has heavy change rates. It's not in the same ballpark; it's a completely different implementation. It's about as similar as a gas and a diesel engine. They might both go in cars, they might both move the car. They aren't remotely close to each other from a design perspective.

--Tim
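The distinction Tim is drawing (copy old data aside at write time versus always writing new data to new blocks and merely retaining the old block map) can be shown with a toy model. This is a minimal sketch only: the dict-based "disk" of whole blocks and the class names are invented for illustration and bear no relation to how VSS, WAFL, or ZFS actually lay out data.

    # Toy contrast of two snapshot strategies over a dict-based "disk"
    # (block number -> bytes). Purely illustrative.

    class DiffAreaSnapshot:
        """VSS-style copy-on-write snapshot: before a live block is first
        overwritten, its old contents are copied into a 'differences area'."""
        def __init__(self, disk):
            self.disk = disk                   # live volume, modified in place
            self.snapped = set(disk)           # blocks that existed at snapshot time
            self.diff_area = {}                # saved old copies

        def write(self, block, data):
            if block in self.snapped and block not in self.diff_area:
                self.diff_area[block] = self.disk[block]  # extra read+write on first change
            self.disk[block] = data            # then overwrite in place

        def read_snapshot(self, block):
            if block not in self.snapped:
                return None                    # block didn't exist at snapshot time
            return self.diff_area.get(block, self.disk[block])


    class RedirectOnWrite:
        """ZFS/WAFL-style: new data never overwrites old blocks, so a snapshot
        is just a retained reference to the old block map and copies no data."""
        def __init__(self, disk):
            self.block_map = dict(disk)        # logical block -> current contents
            self.snapshots = {}

        def snapshot(self, name):
            # A real implementation keeps a constant-size root reference;
            # copying the dict here is only for the toy.
            self.snapshots[name] = dict(self.block_map)

        def write(self, block, data):
            self.block_map[block] = data       # old contents remain reachable via snapshots

        def read_snapshot(self, name, block):
            return self.snapshots[name].get(block)


    if __name__ == "__main__":
        live = {0: b"hello", 1: b"world"}

        vss = DiffAreaSnapshot(dict(live))
        vss.write(0, b"HELLO")                 # pays a copy into the differences area
        print(vss.read_snapshot(0))            # b'hello'

        row = RedirectOnWrite(dict(live))
        row.snapshot("monday")                 # nothing is copied
        row.write(0, b"HELLO")
        print(row.read_snapshot("monday", 0))  # b'hello'

The only point of the toy is that the differences-area approach pays an extra read and write for every first change to a block after the snapshot, while the redirect-on-write approach pays nothing beyond what it already does for every write.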
On Mon, July 12, 2010 10:03, Tim Cook wrote:
> That is exactly how pretty much everyone else takes snapshots in the
> industry, and exactly why nobody can keep more than a handful on disk at
> any one time, and sometimes not even that for data that has heavy change
> rates.

The nice thing about VSS is that snapshots can be requested by applications. Though ZFS is ACID, and you can design an application to have ACID writes to disk, linking the two can be tricky. And not all applications are ACID (image editors, word processors, etc.).

It'd be handy to have a mechanism where applications could register for snapshot notifications. When one is about to happen, they could be told about it and do what they need to do. Once all the applications have acknowledged the snapshot alert--and/or after a pre-set timeout--the file system would create the snapshot, and then notify the applications that it's done.

Given that snapshots will probably be more popular in the future (WAFL NFS/LUNs, ZFS, Btrfs, VMware disk image snapshots, etc.), an agreed-upon consensus would be handy (D-Bus? POSIX?).
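As a rough sketch of the coordination David is proposing (register, notify, acknowledge or time out, snapshot, notify done), here is what such a broker might look like. No such API exists in any filesystem today; the class and callback names are invented purely for illustration, and the actual snapshot step is passed in as a callable.

    import threading
    import time

    class SnapshotCoordinator:
        """Hypothetical broker for the proposed notify/ack/timeout protocol."""

        def __init__(self, take_snapshot, timeout=5.0):
            self.take_snapshot = take_snapshot  # callable that actually creates the snapshot
            self.timeout = timeout
            self.listeners = []                 # (prepare_cb, done_cb) pairs

        def register(self, prepare_cb, done_cb):
            self.listeners.append((prepare_cb, done_cb))

        def run(self, name):
            threads = []
            for prepare_cb, _ in self.listeners:
                # Each application quiesces itself (flushes buffers, pauses
                # writes) in its prepare callback.
                t = threading.Thread(target=prepare_cb, args=(name,))
                t.start()
                threads.append(t)
            deadline = time.time() + self.timeout
            for t in threads:
                t.join(max(0.0, deadline - time.time()))  # proceed anyway after the timeout
            self.take_snapshot(name)            # the filesystem snapshot itself
            for _, done_cb in self.listeners:
                done_cb(name)                   # applications may resume normal writes

    if __name__ == "__main__":
        coord = SnapshotCoordinator(lambda n: print("snapshot %s created" % n), timeout=2.0)
        coord.register(lambda n: print("app flushing before %s" % n),
                       lambda n: print("app resumed after %s" % n))
        coord.run("backup-2010-07-12")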
On 7/12/2010 8:13 AM, David Magda wrote:
> The nice thing about VSS is that snapshots can be requested by applications.
> Though ZFS is ACID, and you can design an application to have ACID writes
> to disk, linking the two can be tricky. And not all applications are ACID
> (image editors, word processors, etc.).

ZFS is NOT automatically ACID. There is no guarantee of commits for async write operations; you would have to use synchronous writes to guarantee commits. And, furthermore, I think that there is a strong probability that ZFS won't pass other aspects of ACID. Despite what certain folks have been saying for a while (*cough* Oracle *cough* Microsoft *cough*), the filesystem is NOT a relational database. They have very distinctly different design criteria.

You can also easily have applications request a ZFS snapshot, though not specifically through an API right now.

> It'd be handy to have a mechanism where applications could register for
> snapshot notifications. When one is about to happen, they could be told
> about it and do what they need to do. Once all the applications have
> acknowledged the snapshot alert--and/or after a pre-set timeout--the file
> system would create the snapshot, and then notify the applications that
> it's done.

Why would an application need to be notified? I think you're under the misconception that something happens when a ZFS snapshot is taken. NOTHING happens when a snapshot is taken (OK, well, there is the snapshot reference name created). Blocks aren't moved around, we don't copy anything, etc. Applications have no need to "do anything" before a snapshot is taken.

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
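To make Erik's point concrete (an application can request a ZFS snapshot today without any dedicated notification API, simply by invoking the CLI), here is a minimal sketch. The dataset name is a placeholder, and the caller is assumed to have snapshot privileges, e.g. via a 'zfs allow' delegation.

    import datetime
    import subprocess

    def request_snapshot(dataset, tag=None):
        """Create a ZFS snapshot of `dataset` by shelling out to the zfs(1M) CLI."""
        tag = tag or datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        snapname = "%s@%s" % (dataset, tag)
        # 'zfs snapshot' returns almost immediately; no data is copied at this point.
        subprocess.run(["zfs", "snapshot", snapname], check=True)
        return snapname

    if __name__ == "__main__":
        # "tank/appdata" is a made-up dataset name used only for illustration.
        print(request_snapshot("tank/appdata", "before-upgrade"))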
Erik Trimble wrote:
> it does look like they'll win, I would bet huge chunks of money that
> Oracle cross-licenses the patents or pays for a license, rather than
> kill ZFS (it simply makes too much money for Oracle to abandon).

Out of sheer curiosity - and I'm not disagreeing with you, just wondering - how does ZFS make money for Oracle when they don't charge for it? Do you think it's such an important feature that it's a big factor in customers picking Solaris over other platforms?
On 7/12/2010 8:49 AM, Linder, Doug wrote:
> Out of sheer curiosity - and I'm not disagreeing with you, just wondering - how does ZFS make money for Oracle when they don't charge for it? Do you think it's such an important feature that it's a big factor in customers picking Solaris over other platforms?

It's a core part of the Storage 7000-series appliances. They would be significantly less appealing without ZFS.

And, yes, it *is* a huge selling point for Solaris. Solaris/ZFS is a decisive factor in much of Oracle's storage server sales.

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Linder, Doug wrote:
> Out of sheer curiosity - and I'm not disagreeing with you, just wondering - how does ZFS make money for Oracle when they don't charge for it? Do you think it's such an important feature that it's a big factor in customers picking Solaris over other platforms?

Yes, it is one of many significant factors in customers choosing Solaris over other OSes. Having chosen Solaris, customers then tend to buy Sun/Oracle systems to run it on.

Of course, there are the 7000-series products too, which are heavily based on the capabilities of ZFS, amongst other Solaris features.

-- 
Andrew Gabriel
On Mon, 2010-07-12 at 17:05 +0100, Andrew Gabriel wrote:
> Yes, it is one of many significant factors in customers choosing Solaris
> over other OSes. Having chosen Solaris, customers then tend to buy
> Sun/Oracle systems to run it on.
>
> Of course, there are the 7000-series products too, which are heavily
> based on the capabilities of ZFS, amongst other Solaris features.

And the next release of Solaris (whenever it comes out) is supposed to make far more use of ZFS for things like its packaging system (upgrades using snapshots, etc.) and zones. Indeed, it's possible (I've not checked in a long time) that S10 makes use of snapshots for Live Upgrade if root is ZFS.

ZFS is a key strategic component of Solaris going forward. Having to abandon it would be a heavy blow -- quite possibly (IMO) fatal -- at least to its future with Oracle.

	- Garrett
On Mon, Jul 12, 2010 at 11:09 AM, Garrett D'Amore <garrett at nexenta.com> wrote:
> And the next release of Solaris (whenever it comes out) is supposed to
> make far more use of ZFS for things like its packaging system (upgrades
> using snapshots, etc.) and zones. Indeed, it's possible (I've not
> checked in a long time) that S10 makes use of snapshots for Live Upgrade
> if root is ZFS.

It does.

> ZFS is a key strategic component of Solaris going forward. Having to
> abandon it would be a heavy blow -- quite possibly (IMO) fatal -- at
> least to its future with Oracle.
>
>	- Garrett
On 7/12/2010 9:09 AM, Garrett D'Amore wrote:
> And the next release of Solaris (whenever it comes out) is supposed to
> make far more use of ZFS for things like its packaging system (upgrades
> using snapshots, etc.) and zones. Indeed, it's possible (I've not
> checked in a long time) that S10 makes use of snapshots for Live Upgrade
> if root is ZFS.

Solaris 10 LiveUpgrade does indeed currently use ZFS snapshots. It has for at least the last couple of Update releases (I want to say it appeared in Update 6, but I can't remember exactly). I'd have to look, but I don't think ZFS is *currently* used for the zone scripts, though there's no barrier to it being used with (or inside) zones.

> ZFS is a key strategic component of Solaris going forward. Having to
> abandon it would be a heavy blow -- quite possibly (IMO) fatal -- at
> least to its future with Oracle.

Losing ZFS would indeed be disastrous, as it would leave Solaris with only the Veritas File System (VxFS) as a semi-modern filesystem, and a non-native FS at that (i.e. VxFS is a 3rd-party, for-pay FS, which severely inhibits its uptake). UFS is just way too old to be competitive these days.

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
On Mon, 12 Jul 2010, Edward Ned Harvey wrote:
>
> Precisely.
>
> A private license, with support and indemnification from Sun, would
> shield Apple from any lawsuit from NetApp.

This sort of statement illustrates a lack of knowledge of how indemnification and patents work. The patent holder is not compelled in any way to offer a license for use of the patent. Without a patent license, shipping products can be stopped dead in their tracks.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
On Mon, Jul 12, 2010 at 05:05:41PM +0100, Andrew Gabriel wrote:
> Linder, Doug wrote:
> > how does ZFS make money for Oracle when they don't charge for it? Do
> > you think it's such an important feature that it's a big factor in
> > customers picking Solaris over other platforms?
>
> Yes, it is one of many significant factors in customers choosing Solaris
> over other OSes. Having chosen Solaris, customers then tend to buy
> Sun/Oracle systems to run it on.

You hit the nail on the head, twice. But only if one doesn't have to sell one's kingdom to get recommended/security patches. Otherwise the windooze nerds take over ...

Regards,
jel.
-- 
Otto-von-Guericke University    http://www.cs.uni-magdeburg.de/
Department of Computer Science  Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany        Tel: +49 391 67 12768
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Linder, Doug
>
> Out of sheer curiosity - and I'm not disagreeing with you, just
> wondering - how does ZFS make money for Oracle when they don't charge
> for it? Do you think it's such an important feature that it's a big
> factor in customers picking Solaris over other platforms?

ZFS was the sole factor in my decision to buy a Sun server with Solaris this year, to replace my NetApp. In addition, I bought some Dell machines and paid for Solaris on those, to keep around as backup destinations for the production Sun file server.

I absolutely do believe ZFS is a huge selling point for Sun hardware and Solaris. Especially for file servers.
> From: Bob Friesenhahn [mailto:bfriesen at simple.dallas.tx.us]
>
> > A private license, with support and indemnification from Sun, would
> > shield Apple from any lawsuit from NetApp.
>
> The patent holder is not compelled
> in any way to offer a license for use of the patent. Without a patent
> license, shipping products can be stopped dead in their tracks.

It may be true that NetApp could stop Apple from shipping OSX, if Apple had ZFS in OSX and NetApp won the lawsuit. But there was a time when it was absolutely possible for Sun and Apple to reach an agreement which would limit Apple's liability in the event of a lawsuit waged against them.

CDDL contains an explicit disclaimer of warranty, which means that if Apple were to download CDDL ZFS source code and compile and distribute it themselves, they would be fully liable for any lawsuit waged against them. But CDDL also allows Sun to distribute ZFS binaries under a different license, in which Sun could have assumed responsibility for losses in the event Apple were to be sued.
On Tue, 2010-07-13 at 10:51 -0400, Edward Ned Harvey wrote:
> CDDL contains an explicit disclaimer of warranty, which means that if Apple
> were to download CDDL ZFS source code and compile and distribute it themselves,
> they would be fully liable for any lawsuit waged against them. But CDDL
> also allows Sun to distribute ZFS binaries under a different license, in
> which Sun could have assumed responsibility for losses in the event Apple
> were to be sued.

That would not, IMO, have prevented a potential stop-ship order from halting MacOS X shipments. I just think it would have created a situation where Apple could have insisted that Oracle (well, Sun) reimburse it for lost revenue.

The lawyers at Sun were typically defensive, in that they frowned (very much) upon any legal agreements which left Sun in a position of unlimited legal liability. This actually nearly prevented the development of certain software, since that software required an NDA clause which provided for unlimited liability due to lost revenue were Sun to leak the NDA content. (We developed the software using openly obtainable materials rather than NDA content, to prevent this possibility.)

	- Garrett
On 7/12/10 Jul 12, 10:49 AM, "Linder, Doug" <Doug.Linder at merchantlink.com> wrote:

> Out of sheer curiosity - and I'm not disagreeing with you, just wondering -
> how does ZFS make money for Oracle when they don't charge for it? Do you
> think it's such an important feature that it's a big factor in customers
> picking Solaris over other platforms?

I'm looking at a new web server for the company, and am considering Solaris specifically because of ZFS. (Oracle's lousy sales model -- specifically the unwillingness to give a price for a Solaris support contract without my having to send multiple emails to multiple addresses -- may yet push me back to my default CentOS platform, but to the extent that Oracle is even in the running it's because of ZFS.)

-- 
Dave Pooser, ACSA
Manager of Information Services
Alford Media  http://www.alfordmedia.com
Edward Ned Harvey <solaris2 at nedharvey.com> wrote:
> CDDL contains an explicit disclaimer of warranty, which means that if Apple
> were to download CDDL ZFS source code and compile and distribute it themselves,
> they would be fully liable for any lawsuit waged against them. But CDDL
> also allows Sun to distribute ZFS binaries under a different license, in
> which Sun could have assumed responsibility for losses in the event Apple
> were to be sued.

And in terms of market and commerce, you will not find a partner that will assume full liability for software you got for free. Apple could probably have had a chance to get indemnified by Sun if they had paid royalties for ZFS...

Jörg

-- 
EMail: joerg at schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
       js at cs.tu-berlin.de (uni)
       joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
On Tue, Jul 13, 2010 at 11:40 PM, Edward Ned Harvey <solaris2 at nedharvey.com> wrote:
> ZFS was the sole factor in my decision to buy a Sun server with Solaris this
> year, to replace my NetApp. In addition, I bought some Dell machines and
> paid for Solaris on those, to keep around as backup destinations for the
> production Sun file server.
>
> I absolutely do believe ZFS is a huge selling point for Sun hardware and
> Solaris. Especially for file servers.

Yes, as long as you're buying that OS from Oracle. :-)

But don't forget that Oracle looks like it is killing OpenSolaris and the entire community after all: there are no recent builds at genunix.org (the latest is 134, and it seems like that's it), Oracle stopped building OSOL after build 135 (I have no idea where this build is), and Oracle is building "Solaris Next" or something like that; I have no idea where to get that thing either. So there is no more free Solaris that you can use in a business, supporting it yourself, and no more chance to build a reliable free storage box or something like that (Nexenta is building their stuff on top of the *outdated* 134 build). The latest checkout won't build the OS either (I tried and it fails), so the repository might be intentionally broken so that you can't build the stuff yourself but instead go and buy an Oracle product. Also no more free security updates and no more hardware-only support. That means the community will soon shrink to zero. Oracle basically lied about the Fedora/RHEL model analogy (which would have been great if it had happened).

I wish I were wrong, but it looks pretty much game over, folks: Oracle appears to be complete idiots towards the community. The same will probably happen to Java. :-(

-- 
Kind regards, BM

Things, that are stupid at the beginning, rarely ends up wisely.
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Dave Pooser
>
> I'm looking at a new web server for the company, and am considering Solaris
> specifically because of ZFS. (Oracle's lousy sales model -- specifically the
> unwillingness to give a price for a Solaris support contract without my
> having to send multiple emails to multiple addresses -- may yet push me back
> to my default CentOS platform, but to the extent that Oracle is even in the
> running it's because of ZFS.)

Here's a really simple way to get some pricing information:

Go to Dell.com. Servers. Servers. Rack. Enhanced. PowerEdge R710 (Customize.) You could pick any server that supports Solaris; I just chose the R710 because I know it does.

Operating system:
  None          $1299 ($0)
  1yr basic     $1768 ($469)
  1yr standard  $2149 ($850)
  1yr premium   $2569 ($1270)
  3yr basic     $2533 ($1234)
  3yr standard  $3619 ($2320)
  3yr premium   $4753 ($3454)

If you're new to Solaris etc., I might not recommend the Dell, because installation isn't straightforward. Hardware support exists, but it's less "enterprise" than what you might expect. The Sun hardware is the recommended way to go, but it's also more expensive. Then again, if you're considering CentOS, you're probably not running on "enterprise" grade hardware.

I know I can't get CentOS to reliably run things like OpenManage for RAID controller configuration, which is necessary if you want to replace hot-spare drives without rebooting the server. You don't need OpenManage if you have no hot spares ... for example, all disks in a raid6 would be OK and autoresilver without intervention.

I do run Solaris on an R710. There is no OpenManage, but there is MegaCLI, which was ridiculously hard to find, ridiculously confusing to use, and poorly documented and poorly supported if you're confused.

On the Dell, assuming you use PERC and assuming you have a hot spare, the recommended solution for Linux would be RHEL or SLES and not CentOS. You'd pay $350/yr as compared to Solaris at $470/yr.
On 07/14/10 04:20 PM, Edward Ned Harvey wrote:
> If you're new to Solaris etc., I might not recommend the Dell, because
> installation isn't straightforward. Hardware support exists, but it's less
> "enterprise" than what you might expect. The Sun hardware is the
> recommended way to go, but it's also more expensive.

Not in my neck of the woods, Sun have always been most competitive.

-- 
Ian.
> From: BM [mailto:bogdan.maryniuk at gmail.com]
>
> But don't forget that Oracle looks like it is killing OpenSolaris and the
> entire community after all: there are no recent builds at genunix.org (the
> latest is 134, and it seems like that's it), Oracle stopped building OSOL
> after build 135 (I have no idea where this build is)

It is true that there's been no new build published in the last 3 months. But you can't use that to assume they're killing the community. I have said many times: consider what the possibilities are.

Solaris 10 is lacking features and bugfixes which are present in the free OpenSolaris: very marketable features such as dedupe and log device removal. Oracle has stated that their focus is on commercialization of Solaris. Solaris 10 is long overdue for a new release. It's entirely possible (and would make total sense) that they're shifting development effort away from the OpenSolaris community to push out the "Next" version of Solaris, probably called Oracle Solaris 11. They said they would release the next OpenSolaris in H1 this year, but they're overdue. They also said they would release the next Solaris this year. It's entirely possible they just need all the developers they have to deliver that goal.

IMHO, I think this possibility is more likely than the "we are killing OpenSolaris" possibility. The latter wouldn't make any sense.

> and Oracle is building "Solaris Next" or something like that;
> I have no idea where to get that thing either.

There's no known name for that. The "Next" version of Solaris, I suspect, will be called Oracle Solaris 11, but until that is announced, nobody knows, and it's colloquially and unofficially called "Solaris Next" or "Solaris 11". Whenever it becomes available, it will be available via the usual commercial channels.
> From: Ian Collins [mailto:ian at ianshome.com]
>
> > From: Edward Ned Harvey
> > The Sun hardware is the
> > recommended way to go, but it's also more expensive.
>
> Not in my neck of the woods, Sun have always been most competitive.

Interesting. I wonder what's different between you and me?

I most often buy relatively low-end servers: say, one CPU, 4 cores, 16G of RAM, 6TB of SATA disks. I might expect to pay $3k or $4k. Last October, I didn't see any Sun offering below $6k. I know Sun has better high-end servers. Maybe that's what you're buying, and maybe that's the area where Sun's prices are more competitive?
On Wed, Jul 14, 2010 at 04:28:44PM +1200, Ian Collins wrote:
> > If you're new to Solaris etc., I might not recommend the Dell, because
> > installation isn't straightforward. Hardware support exists, but it's less
> > "enterprise" than what you might expect. The Sun hardware is the
> > recommended way to go, but it's also more expensive.
>
> Not in my neck of the woods, Sun have always been most competitive.

You find Sun to be a better deal than Supermicro? Especially when you're sticking a very large number of disks into it, and can't source the diskless caddies elsewhere?

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
On Wed, Jul 14, 2010 at 6:42 PM, Eugen Leitl <eugen at leitl.org> wrote:
>> Not in my neck of the woods, Sun have always been most competitive.
>
> You find Sun to be a better deal than Supermicro? Especially
> when you're sticking a very large number of disks into it, and
> can't source the diskless caddies elsewhere?

My few little cents here. I am running stuff on Supermicro and OpenSolaris, starting from snv_121 times. Supermicro is very cheap yet also reliable stuff (which is very strange!!, ha-ha!).

Saying "Sun hardware" is competitive? I would doubt that. The cheapest available from Sun is the SunFire X2270: http://www.oracle.com/us/products/servers-storage/servers/x86/sun-fire-x2270-m2-ds-070252.pdf. I have some experience with this machine and I have to say: while it is a good machine and built well, it is also very (I mean VERY) noisy, has one non-redundant power supply (what a loss!), and it is very, very non-green: it will eat your power like a diesel locomotive. :) The guts inside are quite cheap, so basically it is just a "Sun" label on top of average Asian-built hardware. Yes, they are good machines, but at the same time nothing really special, and the price is quite big.

Supermicro is the same kind of beast, just 10x (well, maybe less) cheaper. In my case I had to remove the DVD drive in order to let the thing boot so I could install OpenSolaris; somehow OSOL could not detect the DVD drive and the boot always hung (at installation phase). So I installed the thing from a USB stick. After that everything is OK.

-- 
Kind regards, BM

Things, that are stupid at the beginning, rarely ends up wisely.
Roy Sigurd Karlsbakk wrote on 2010-Jul-14 13:49 UTC:
----- Original Message -----
> I'm looking at a new web server for the company, and am considering Solaris
> specifically because of ZFS. (Oracle's lousy sales model -- specifically the
> unwillingness to give a price for a Solaris support contract without my
> having to send multiple emails to multiple addresses -- may yet push me back
> to my default CentOS platform, but to the extent that Oracle is even in the
> running it's because of ZFS.)

1. Install Nexenta (free) or NexentaStor (costs a little more, but is supported commercially) on a box designed for storage, and configure the services needed there (NFS, CIFS, iSCSI, FC, whatever).

2. Install Your Favourite Operating System Or Distro (tm) on another box (or VM) and run the services there.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
Roy Sigurd Karlsbakk
2010-Jul-14 13:52 UTC
[zfs-discuss] Legality and the future of zfs...
> I wish I am wrong, but looks to me pretty much game over, folks:
> Oracle appeared to be complete idiots towards the community. Same
> probably will happen to Java.

Once the code is in the open, it'll remain there. To paraphrase Cory Doctorow on this: it's easy to release the source of a project -- like adding ink to your swimming pool -- but it's rather harder to get the ink back out of the pool...

Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og relevante synonymer på norsk.
Dave Pooser wrote:
> I'm looking at a new web server for the company, and am considering
> Solaris specifically because of ZFS. (Oracle's lousy sales model -- specifically
> the unwillingness to give a price for a Solaris support contract without my
> having to send multiple emails to multiple addresses -- may yet push me
> back to my default CentOS platform

I imagine this is probably more of a relic of Sun's abominably horrible so-called "sales" department than an Oracle thing. Oracle, as inefficient as they are, at least *has* salespeople. But Sun... I think Sun could have been the next IBM if they had just wanted to... you know -- SELL STUFF. But their response was always the same: "order it from the web site. If you don't want that, call our resellers." Basically, "talk to the hand".

There were several times when I would essentially show up on Sun's doorstep with a couple million dollars available to spend and say "hey, if you give us an hour or two of pre-sales support, we'll give you this money." The response was, inevitably, "talk to a reseller". And resellers only know one sentence: "Send us what you need and we'll fax you a quote." There was never any technical sales advice available whatsoever. Sun basically beat its customers away with sticks and middle fingers. While Dell and HP would come out and wine and dine the sysadmins, spew free swag around like a firehose, buy you lap dances (well, OK, maybe not that last one) and actually TRY to get sales... to Sun, pre-sales support didn't even exist. Got a technical question? Read the web site.

OK, rant over. Sorry for going off-topic. As you can see, I'm not bitter at all about the subject. :)
Roy Sigurd Karlsbakk
2010-Jul-14 14:15 UTC
[zfs-discuss] Legality and the future of zfs...
----- Original Message -----> On Wed, Jul 14, 2010 at 6:42 PM, Eugen Leitl <eugen at leitl.org> wrote: > >> Not in my neck of the woods, Sun have always been most competitive. > > > > You find Sun to be a better deal than Supermicro? Especially, > > when you''re sticking a very large number of disks into it, and > > can''t source the diskless caddies elsewhere? > > My few little cents here.<snip/> We just got some supermicros with 2x12core opterons stuffed with 4GB chips (64GB total per box) and it cost us less than EUR 20k for all three of them. As I work for a research institute, we have some discounts from HP et al, but they didn''t get remotely close. Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er et element?rt imperativ for alle pedagoger ? unng? eksessiv anvendelse av idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og relevante synonymer p? norsk.
On Wed, 2010-07-14 at 11:42 +0200, Eugen Leitl wrote:
> On Wed, Jul 14, 2010 at 04:28:44PM +1200, Ian Collins wrote:
> <snip/>
> You find Sun to be a better deal than Supermicro? Especially,
> when you're sticking a very large number of disks into it, and
> can't source the diskless caddies elsewhere?

Not to beat a dead horse here, but that's an Apples-to-Oranges comparison (it's raining idioms!). You can't compare an OEM server (Dell, Sun, whatever) to a custom-built box from a parts assembler. Not the same thing. Different standards, different prices.

-- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800)
On Wed, 2010-07-14 at 21:43 +0900, BM wrote:
> <snip/>

But you're not doing an equal comparison. We've talked about this on this list before.

OEM equipment has a whole bunch of different features that you can't get via a build-it-yourself rig like Supermicro (even if you are having a whitebox vendor assemble the Supermicro and not do it yourself). Not just Sun equipment, but all OEM equipment is in a totally different class.

Now, maybe you don't want those extra features, and that's fine. But don't think that you can say "well, my Fiat (car) is better than your Peterbilt (semi), since it costs 10% of the price, and both can drive down the highway at 100 kph". Up front pricing is but one of many different aspects of buying a server, and for many of us, it's not even the most important.

When doing price comparison, you have to compare within the same class. Cross-class comparisons are generally meaningless, since they're too different, and (more importantly) features have different values to different people. Pick the class of server you care about, *then* talk about pricing.

-- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800)
Erik Trimble wrote:
> OEM equipment has a whole bunch of different features that you can't
> get via a build-it-yourself rig like Supermicro (even if you are having a
> whitebox vendor assemble the Supermicro and not do it yourself). Not
> just Sun equipment, but all OEM equipment is in a totally different
> class.

I completely agree with Erik. There's just no comparison between the "bunch of commodity parts shoved in a box" el-cheapo systems and real, enterprise-class systems. There's *engineering* involved. Even if the specs look similar, the stability, reliability, and management features make all the difference.

As he said, you may not need the extra quality and features, and if you don't, that's fine. But comparing OEM equipment to system-builder hardware is apples and oranges. If you're a small outfit that doesn't have too many systems and can't afford the higher-end stuff, then by all means go with what you can afford. If you're a large corporation running business-critical apps and/or a high-availability environment but are just trying to save a few bucks, then replacing enterprise-class hardware with cheap substitutes is a false economy. You'll end up spending a lot more in the long run on everything from repairs to downtime.
On Wed, Jul 14, 2010 at 07:18:59AM -0700, Erik Trimble wrote:

> Not to beat a dead horse here, but that's an Apples-to-Oranges

No, no, 'e's uh,...he's resting.

> comparison (it's raining idioms!). You can't compare an OEM server
> (Dell, Sun, whatever) to a custom-built box from a parts assembler. Not
> the same thing. Different standards, different prices.

Sure, if your 3rd party disks don't play nice with your chassis, or you need cubic-carbon-studded platinum level support for your mission critical piece of infrastructure, you're out to lunch when it hits the fan ;p

However, in a whole series of anecdotes I've done quite well ditching Dells and Suns and HPs for Supermicro, and sourcing disks (and sometimes memory) from the likes of TechData and IngramMicro. No doubt, others have very different stories to tell.

-- Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
On Wed, Jul 14, 2010 at 11:27 PM, Erik Trimble <erik.trimble at oracle.com> wrote:
> But you're not doing an equal comparison.

Thanks for the enlightenment. I also work in a datacenter, like you do, so I am also perfectly aware of hardware, just like you are -- that's for the record. I picked the hardware, and the point is that the *equivalent* machine from Supermicro is cheaper (because I assume I am talking to pros here?). Besides, I know what is inside that Sun Fire thing -- I know the guts, the manufacturer, and I even know where the factory is located, BTW. Again: nothing special about that machine, and it also has only one power supply (which is a very convenient BOFH excuse when your Java fails on a trade bid). The other machines Snorcle offers have better hardware (yes, they do) and better power supplies etc., but their prices are also damn big.

Once again: I certainly like that Sun hardware and I think it is good. But, again, I am saying that it is expensive stuff and Supermicro can easily replace it (unless Snorcle intentionally breaks Solaris so it won't boot on that hardware, which is quite possible, because Oracle has less than zero trust among geeks).

> OEM equipment has a whole bunch of different features that you can't get
> via a build-it-yourself rig like Supermicro (even if you are having a
> whitebox vendor assemble the Supermicro and not do it yourself). Not
> just Sun equipment, but all OEM equipment is in a totally different
> class.

Oh, sure, it must be so, since it is assembled at Oracle (well, not really, but at least the logo is there). :-) And what are those outstanding features we cannot get on an equivalent Supermicro, I'd like to know? For example, what is so special about that machine in particular? Can you please tell me exactly, because I'd like to hear it explicitly? Or do you want me to give you a real cost estimate for the actual parts and the actual price of each component, including the case? It adds up to almost nothing -- it is cheap like mushrooms. And the folks @oracle.com know that perfectly well. But the price is still huge. The question is, for what exactly? (I really don't know why the price is so high -- maybe the "Sun" logo contains pure platinum, or the chassis is golden? I don't know)... Why is the price so high?

> Now, maybe you don't want those extra features, and that's fine. But
> don't think that you can say "well, my Fiat (car) is better than your
> Peterbilt (semi), since it costs 10% of the price, and both can drive
> down the highway at 100 kph". Up front pricing is but one of many
> different aspects of buying a server, and for many of us, it's not even
> the most important.

No, no -- your Fiat is actually much worse, it is like a Russian LADA. :) Because with Supermicro, for the same price, I get more RAM, a better CPU, larger storage and TWO power supplies. And yes, it is quieter and takes less power, so it is greener. Support is also very good: parts are replaced very quickly if they fail (we don't have only one box in our DC, you know). We also get hardware-only support (unlike what Snorcle offers) and the price is much, much smaller. But to be fair, I have to admit that the LED indicators blink better on Sun machines -- the colour is more vivid and cool, and the aluminium case looks more like an Apple XServe. :-)

> When doing price comparison, you have to compare within the same class.

Right. You need an explicit drill-down, fine. So the Sun Fire X2270 M2 Server at pure list price costs $3,962, with a one-year warranty.
And the identical Supermicro 6016T-TF, with exactly the same config and warranty, at the full price I have to pull out of my wallet: $2,190 -- which is nearly twice as cheap. I think the red Oracle logo label (or the blue Sun one) must cost the rest -- it must be made from a chunk of platinum... :)

-- Kind regards, BM Things, that are stupid at the beginning, rarely ends up wisely.
Roy Sigurd Karlsbakk
2010-Jul-14 15:22 UTC
[zfs-discuss] Legality and the future of zfs...
----- Original Message -----
> On Wed, Jul 14, 2010 at 07:18:59AM -0700, Erik Trimble wrote:
> <snip/>
> However, in a whole series of anecdotes I've done quite well ditching
> Dells and Suns and HPs for Supermicro, and sourcing disks (and sometimes
> memory) from the likes of TechData and IngramMicro. No doubt, others have
> very different stories to tell.

I cannot but agree. We have just as little downtime with our Supermicros as with all the other hardware we have (mostly HP and Sun). So long as you get hardware that the OS understands and with which the drives work well, you're set. If you need a Porsche or a Mercedes or a Subaru Impreza WRX STI, well, go get it. I'd rather get more Supermicros and let them play together, so that when one fails, the others still play -- and believe me, your Sun or HP or Dell or IBM systems will fail just as badly as the Supermicros.

Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og relevante synonymer på norsk.
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss- > bounces at opensolaris.org] On Behalf Of Erik Trimble > > Not to beat a dead horse here, but that''s an Apples-to-Oranges > comparison (it''s raining idioms!). You can''t compare an OEM server > (Dell, Sun, whatever) to a custom-built box from a parts assembler. > Not > that same thing. Different standards, different prices.I''ll second that. And I think this is how you can tell the difference: With supermicro, do you have a single support number to call and a 4hour onsite service response time? In order to provide that level of service, they need to have access to all the requisite parts, in a supply chain and warehousing, and they need to ensure some outside manufacturer doesn''t discontinue a specific model of component, or change the firmware in a way that will cause a new and unexpected problem. And so on. When you pay for the higher prices for OEM hardware, you''re paying for the knowledge of parts availability and compatibility. And a single point vendor who supports the system as a whole, not just one component.
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss- > bounces at opensolaris.org] On Behalf Of Edward Ned Harvey > > When you pay for the higher prices for OEM hardware, you''re paying for > the > knowledge of parts availability and compatibility. And a single point > vendor who supports the system as a whole, not just one component.For the record: I''m not saying this is always worth while. Sometimes I buy the enterprise product and triple-platinum support. Sometimes I buy generic blackboxes with mfgr warranty on individual components. It depends on your specific needs at the time. I will say, that I am a highly paid senior admin. I only buy the generic black boxes if I have interns or junior (no college level) people available to support them.
On Wed, Jul 14, 2010 at 12:57 PM, Edward Ned Harvey <solaris2 at nedharvey.com> wrote:>> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss- >> bounces at opensolaris.org] On Behalf Of Edward Ned Harvey >> >> When you pay for the higher prices for OEM hardware, you''re paying for >> the >> knowledge of parts availability and compatibility. ?And a single point >> vendor who supports the system as a whole, not just one component. > > For the record: > > I''m not saying this is always worth while. ?Sometimes I buy the enterprise > product and triple-platinum support. ?Sometimes I buy generic blackboxes > with mfgr warranty on individual components. ?It depends on your specific > needs at the time. > > I will say, that I am a highly paid senior admin. ?I only buy the generic > black boxes if I have interns or junior (no college level) people available > to support them.Generic != black boxes. Quite the opposite. Some companies are successfully doing the opposite of you: They are using standard parts and a competent staff that knows how to create solutions out of them without having to pay for GUI-powered systems and a 4-hour on-site part swapping service. -- Giovanni Tirloni gtirloni at sysdroid.com
On Tue, 13 Jul 2010, Edward Ned Harvey wrote:> It is true there''s no new build published in the last 3 months. But you > can''t use that to assume they''re killing the community.Hmm, the community seems to think they''re killing the community: http://developers.slashdot.org/story/10/07/14/1448209/OpenSolaris-Governing-Board-Closing-Shop?from=rss ZFS is great. It''s pretty much the only reason we''re running Solaris. But I don''t have much confidence Oracle Solaris is going to be a product I''m going to want to run in the future. We barely put our ZFS stuff into production last year but quite frankly I''m already on the lookout for something to replace it. No new version of OpenSolaris (which we were about to start migrating to). No new update of Solaris 10. *Zero* information about what the hell''s going on... ZFS will surely live on as the filesystem under the hood in the doubtlessly forthcoming Oracle "database appliances", and I''m sure they''ll keep selling their NAS devices. But for home users? I doubt it. I was about to build a big storage box at home running OpenSolaris, I froze that project. Oracle is all about the money. Which I guess is why they''re succeeding and Sun failed to the point of having to sell out to them. My home use wasn''t exactly going to make them a profit, but on the other hand, the philosophy that led to my not using the product at home is a direct cause of my lack of desire to continue using it at work, and while we''re not exactly a huge client we''ve dropped a decent penny or two in Sun''s wallet over the years. Who knows, maybe Oracle will start to play ball before August 16th and the OpenSolaris Governing Board won''t shut themselves down. But I wouldn''t hold my breath. -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768
On Wed, 14 Jul 2010, Roy Sigurd Karlsbakk wrote:> Once the code is in the open, it''ll remain there. To quote Cory Doctorow > on this, it''s easy release the source of a project, it''s like adding ink > to your swimming pool, but it''s a little harder to remove the ink from > the pool...Woo-hoo, the code already released won''t be taken back ;). But considering virtually all zfs development has been and presumably will continue to be by Sun/Oracle employees, that code is going to get stale pretty quick if they stop contributing to it... -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768
On Thu, Jul 15, 2010 at 5:57 AM, Paul B. Henson <henson at acm.org> wrote:
> ZFS is great. It's pretty much the only reason we're running Solaris.

Well, if that is the only reason, then run FreeBSD instead. I run Solaris because of the kernel architecture and other things that Linux or any BSD simply cannot do. For example, running something on a port below 1000 as a true non-root user (i.e. no privilege dropping, just started straight by a non-root account).

> No new version of OpenSolaris (which we were about to start migrating to).
> No new update of Solaris 10. *Zero* information about what the hell's going
> on...

Snorcle is simply killing the community. That is what is happening. They think they can be like Apple or Microsoft and compete with IBM: if you want Solaris, then buy it. The business plan is pretty clear: screw you, community, we don't need ya. Well, not surprising -- they were always dumb towards the community, since they never really understood its importance. The main problem is that Solaris will not be popular anymore. Just for the record: Solaris popularity grew because of OpenSolaris. If you give it to geeks, they will play with it, build stuff, build tools, conquer infrastructures and then spread proprietary software, like Oracle DB, for example. That leads to a bigger knowledge base, more experience and more available brains. But what will happen instead: geeks will dump OpenSolaris into the trash and never make it any better. Also for the record: Solaris 9 and 10 from Sun were plain crap to work with, and are still inconvenient, conservative stagnationware. Geeks won't build free cool tools for Solaris, so the whole thing will turn into a dry job for trained monkeys wearing suits in corporations. Nothing more. That is the philosophy of the last decade, but IT is changing fast and is very different now. That is why Oracle's idea to kill the community is totally stupid. And that is why IBM will win: because you run the same Linux on their hardware as you run at home. Yes, Oracle will do fine for a while, riding the inertia of the hype (and their latest financial report proves that), but soon people will realize that Oracle is just another evil, mean beast with great marketing and the same sh*tty products they always had.

Buy Solaris for any single little purpose? No way, ever! I might buy support and/or security patches and updates, but not the OS itself. If that is the only option, then I'd rather stick to Linux from another vendor, i.e. RedHat. That means no more talking to Oracle about software at the OS level, only about applications (if I am idiot enough to jump into APEX or something like that). Hence, if all I can do is talk about hardware (well, not really, because there is no more hardware-only support!!!), then I'd better talk to IBM, if I need a brand and consider myself too dumb to get Supermicro instead. The IBM System x3550 M3 is still better on specs than the equivalent from Oracle, it is OEM if somebody needs that in the first place, and it is still cheaper than Oracle's similar class. And IBM stuff just works great (at least if we talk about hardware).

I think Oracle is simply screwing themselves here. They don't realize or understand that yet, but they will.
That reminds me of the story of the G1 garbage collector in Java, which Sun wanted you to pay for.

> ZFS will surely live on as the filesystem under the hood in the doubtlessly
> forthcoming Oracle "database appliances", and I'm sure they'll keep selling
> their NAS devices.

To be honest, if I have to sell my soul to Oracle, I'd rather stay with ext4 on Linux. Screw ZFS as well; Oracle can choke on it. Yes, ext is a pain in the butt and requires more dancing around, but Google lives with it very well (as do thousands of other companies), and ext still gives you freedom.

> But for home users? I doubt it. I was about to build a
> big storage box at home running OpenSolaris, I froze that project.

Same here. A lot of nice ideas and potential open-source tools are basically frozen and, I think, are going to be dumped. We (geeks) won't build stuff for Larry for free. We need the OS back in the open in return. So I think OpenSolaris is pretty much game over, thanks to Oracle. Some Oracle fanboys might call this plain FUD and hope for updates etc., but the reality is that Oracle is to OpenSolaris pretty much what Palm was to BeOS. Enjoy your last snv_134 build.

P.S. At least Java won't die that easily, hopefully.

-- Kind regards, BM Things, that are stupid at the beginning, rarely ends up wisely.
On Wed, 2010-07-14 at 13:59 -0700, Paul B. Henson wrote:> On Wed, 14 Jul 2010, Roy Sigurd Karlsbakk wrote: > > > Once the code is in the open, it''ll remain there. To quote Cory Doctorow > > on this, it''s easy release the source of a project, it''s like adding ink > > to your swimming pool, but it''s a little harder to remove the ink from > > the pool... > > Woo-hoo, the code already released won''t be taken back ;). But considering > virtually all zfs development has been and presumably will continue to be > by Sun/Oracle employees, that code is going to get stale pretty quick if > they stop contributing to it... > >I wish folks would realize something important: Continued release of the source code for ON is *not* dependent on any "OpenSolaris" community or on any binary distribution. I *strongly* doubt that Oracle is going to stop making source code available.... it costs them almost nothing (compared to the rest of the "community" efforts, which did cost Sun significantly with little or no gain). Furthermore, the open source of Solaris is critical to many Solaris customers, and I think exec mgmt realizes that if they tried to close it back up, they''d lose far more than just Solaris customers, but probably also Oracle DB customers. The *code* is probably not going away (even updates to the kernel). Even if the community dies, is killed, or commits OGB induced suicide. There is another piece I''ll add: even if Oracle were to stop releasing ZFS or OpenSolaris source code, there are enough of us with a vested interest (commercial!) in its future that we would continue to develop it outside of Oracle. It won''t just go stagnant and die. I believe I can safely say that Nexenta is committed to the continued development and enhancement of this code base -- and to doing so in the open. - Garrett
On Thu, Jul 15, 2010 at 12:49 AM, Edward Ned Harvey <solaris2 at nedharvey.com> wrote:
> I'll second that. And I think this is how you can tell the difference:
> With supermicro, do you have a single support number to call and a 4-hour
> onsite service response time?

Yes.

BTW, just for the record, people tend to have a bunch of spare Supermicros in stock, bought with the money left over from a budget that was originally sized for shiny Sun/Oracle hardware. :) So normally you put them online in a cluster and don't really worry when one of them is gone -- just power that thing down and disconnect it from the grid.

> When you pay for the higher prices for OEM hardware, you're paying for the
> knowledge of parts availability and compatibility. And a single point
> vendor who supports the system as a whole, not just one component.

What kind of compatibility exactly are you talking about? For example, if I break my mylar air shroud for the X8 DP (part number MCP-310-18008-0N) because I step on it accidentally :-D, I am pretty much going to ask them to replace exactly THAT part. Or do you want me to tell you real stories about how OEM hardware is supported and how many emails/phone calls it involves? One of the latest (just a week ago): Apple Support reported that their engineers in the US have no green idea why the Darwin kernel panics on their XServe, so they suggested I replace the motherboard TWICE and keep the OLDER firmware and never upgrade, since upgrading will cause the crash again (although an identical server works just fine with the newest firmware)! I told them N times that the Darwin kernel traceback was yelling about an ACPI problem and gave them logs/tracebacks/transcripts etc., but they still have no idea where the problem is. Do I need such "support"? No. Not at all.

-- Kind regards, BM Things, that are stupid at the beginning, rarely ends up wisely.
On Thu, Jul 15, 2010 at 10:53 AM, Garrett D'Amore <garrett at nexenta.com> wrote:
> The *code* is probably not going away (even updates to the kernel).
> Even if the community dies, is killed, or commits OGB induced suicide.

1. You used the correct word: "probably".
2. No community = stale, outdated code.

> There is another piece I'll add: even if Oracle were to stop releasing
> ZFS or OpenSolaris source code, there are enough of us with a vested
> interest (commercial!) in its future that we would continue to develop
> it outside of Oracle. It won't just go stagnant and die.

So you're saying "let's fork it". Let's imagine some red-eyed zealots decide to do so and actually do it. They have a shiny new Mercurial repo. Now what? Yet another very dead GNU/Hurd? Let's think it through: to fork is to hope that the new product will take off and be popular in the hackerdom, so the geeks can build new stuff on it, use it, fix it, build a knowledge base of how to fix foobar when it happens, collect best practices and so on. The hackerdom IS the place where new real specialists are made. So if Solaris is no longer free, then nobody gives a shell about this OS and it will be as "popular" as AIX from IBM, for which you simply cannot find any specialists to support it. Why? Because nobody knows AIX and nobody wants to know it. There are not many enhancements to AIX other than those done by IBM in their Frankenstein way.

But hey, why fork ZFS and mess with stale Solaris code if the entire future of Solaris is closed, proprietary payware anyway? And opposite to ZFS, we have a totally free BTRFS that has been moved to kernel.org, is *free*, and is for Linux, which is *already* popular AND *free*. Yes, Linux is not the best OS if you compare it to Solaris in some technical areas, which makes things more sophisticated. But on the other hand Linux is totally free and cheap, and you can live with those inconveniences perfectly well (just drink more water and breathe more deeply). You can curse the inconveniences, but in the end it still works cheaply and reliably and is just OK to get things done. Well, BTRFS sucks at some points (software RAID at the kernel level comes to mind), but it is still a better FS for Linux in many ways than extN, and it is still free and more popular. Maybe today BTRFS is not the right answer for the market the way ZFS is, but tomorrow it probably will be, I think: geeks will use BTRFS and Linux, and soon Oracle will deeply regret that they killed Solaris, but no one will throw their energy into making Solaris at least as strong as Linux is now.

> I believe I can safely say that Nexenta is committed to the continued development and enhancement of this code base -- and to doing so in the open.

Yeah, and Nexenta is also committed to backporting the newest updates from build 140 and younger back to snv_134. So I can imagine that the new OS from Nexenta will soon be called "Super Nexenta Version 134". :-)

From what I see now, I think Nexenta will also die eventually, because of BTRFS for Linux, because of Linux's popularity itself, and also thanks to Oracle's help. Sorry to tell you this while you work @nexenta.com... You guys are doing a very good job, but in fact your days are also numbered, I think.

-- Kind regards, BM Things, that are stupid at the beginning, rarely ends up wisely.
On Thu, 2010-07-15 at 11:48 +0900, BM wrote:> > But hey, why to fork ZFS and mess with a stale Solaris code, if the > entire future of Solaris is a closed proprietary payware anyway? And > opposite to ZFS, we have totally free BTRFS that has been moved to the > kernel.org and is *free* and is for Linux that is *already* popular > AND *free*? Yes, Linux is not the best OS, if you compare to Solaris > in some technical parts that would make things just more > sophisticated. But on the other hand Linux is totally free, cheap and > you can live with these inconveniences perfectly (just drink more > water and breath more deeply). You can curse these inconveniences, but > at the end it still works cheap and reliable and is just OK to get > things done. Well, BTRFS sucks at some points (software RAID at kernel > level comes to mind), but it is still better FS for Linux in many > places than extN, but it is still free and more popular. Maybe today > BTRFS is not the right answer as ZFS is to the market, but tomorrow it > probably will be just as opposite, I think: geeks will use BTRFS and > Linux and soon Oracle will deeply regret they''re killed Solaris, but > no one will throw their energy to make Solaris at least as strong as > Linux is now.I think you''re wrong on so many points here about ZFS that I don''t know where to begin, so I won''t even try. I am curious why you''re hanging about in here though, if you''re so convinced that there is no future in ZFS.> > > I believe I can safely say that Nexenta is committed to the continued development and enhancement of this code base -- and to doing so in the open. > Yeah, and Nexenta is also committed to backport newest updates from > 140 and younger builds just back to snv_134. So I can imagine that > soon new OS from Nexenta will be called "Super Nexenta Version 134".We''re working on upgrading to a newer version, but its too risky to put the latest code into production yet. For 4.x which is in early development right now, we will have much newer bits.> :-) > > Currently from what I see, I think Nexenta will also die eventually. > Because of BTRFS for Linux, Linux''s popularity itself and also thanks > to the Oracle''s help. Sorry telling this to you, working @nexenta.com > though... You, guys, are doing a very good job, but in fact, your days > are also doomed, I think.I think you''re wrong. Very much so. And unlike you, I''ve put my money where my mouth is. That said, as you appear to be so firmly convinced that there is no possible positive way forward for ZFS or Solaris, I recommend you go elsewhere instead of apparently wasting your time here. You seem to be totally convinced in the future of Linux and BTRFS, so I recommend you leave this community and join that one. Meanwhile I''m committed to positive solutions. And I''ll have more to say about specific answers to some of the concerns raised here soon. But I''m focused on solving real problems right now, and don''t want to get caught up in a mail storm debate before the critical foundations are laid, so for now you''ll just have to be patient. In short, I''m not interested in hearing any more of the whining about how terrible things are. However, if you want to work on a positive solution, contact me out of band and I''ll talk with you more. - Garrett
> From: BM [mailto:bogdan.maryniuk at gmail.com] > > latest (just a week ago): Apple Support reported me that their > engineers in US has no green idea why Darwin kernel panics on theirStop it... You did *not* just use "apple" and "support" in the same sentence, did you?? ;-) You almost made me spray beer out my nose!
On Thu, Jul 15, 2010 at 12:35 PM, Garrett D'Amore <garrett at nexenta.com> wrote:
> That said, as you appear to be so firmly convinced that there is no
> possible positive way forward for ZFS or Solaris, I recommend you go
> elsewhere instead of apparently wasting your time here.

Thanks a lot for the recommendation, but I think I will figure out where to go without your help. :)

> You seem to be totally convinced in the future of Linux and BTRFS,
> so I recommend you leave this community and join that one.

I am neither convinced nor unconvinced. All I am saying is:
1. There are no new builds.
2. There is no communication from Oracle, and they ignore OpenSolaris.
3. The OGB has decided on suicide, and they actually say much the same.
4. There is no newer Nexenta either -- it is all built on top of an old build.
5. I want to believe I am wrong and that all of the above is BS too. But it seems unlikely...

> In short, I'm not interested in hearing any more of the whining about
> how terrible things are. However, if you want to work on a positive
> solution, contact me out of band and I'll talk with you more.

As soon as Oracle restores the almost-dead OpenSolaris community.

-- Kind regards, BM Things, that are stupid at the beginning, rarely ends up wisely.
On Thu, Jul 15, 2010 at 12:39 PM, Edward Ned Harvey wrote:
>> One of the latest (just a week ago): Apple Support reported that their
>> engineers in the US have no green idea why the Darwin kernel panics on their
>
> Stop it... You did *not* just use "apple" and "support" in the same sentence, did you?? ;-) You almost made me spray beer out my nose!

Yes, I did. ;) But at least now I know what "number one customer satisfaction" means the Apple way: it is when the birdbrained dope finally understands your concern and stops repeating the same question about whether your Snow Leopard serial number is correct.

-- Kind regards, BM Things, that are stupid at the beginning, rarely ends up wisely.
On Wed, Jul 14, 2010 at 9:27 PM, BM <bogdan.maryniuk at gmail.com> wrote:
> <snip/>

You're clearly talking about something completely different than everyone else. Whitebox works GREAT if you've got 20 servers. Try scaling it to 10,000. "A couple extras" ends up being an entire climate controlled warehouse full of parts that may or may not be in the right city. Not to mention you've then got full-time staff on-hand to constantly be replacing parts. Your model doesn't scale for 99% of businesses out there. Unless they're google, and they can leave a dead server in a rack for years, it's an unsustainable plan. Out of the fortune 500, I'd be willing to bet there's exactly zero companies that use whitebox systems, and for a reason.

--Tim
On Thu, Jul 15, 2010 at 1:51 PM, Tim Cook <tim at cook.ms> wrote:
> Not to mention you've then got full-time staff on-hand to constantly be replacing
> parts.

Maybe I don't understand something, but we also had full-time staff on hand constantly replacing Dell's parts... so what's the problem? Dell or HP or Sun boxes crash exactly the same as Supermicro machines (well, not really: Dell is more horrible, if you ask me). The vendor that sells us Supermicro boxes offers the same support we could get from HP or Dell. So all we do is simply pull the thing out of the rack and let the vendor take care of it. Machines are rebuilt automatically from kickstart.

What exactly am I missing, then?

> Your model doesn't scale for 99% of businesses out there. Unless
> they're google, and they can leave a dead server in a rack for years, it's
> an unsustainable plan.

Not sure what you're talking about here, but if I run a cluster, then I am probably OK if some node[s] are gone. :)

Now, how does it not scale? The vendor that works with IBM directly (in my case there is no real IBM in the über-country I am living in, just a third-party company that merely merchandises the name) came and took my hardware for repair. The vendor that works with Dell (same situation) directly came and took my hardware for repair. The vendor that works with HP directly came and took my hardware for repair. Apple officially does NOT repair their XServe, but gives the parts to a third-party company that does the same as for HP or IBM (!) or Dell or Supermicro -- that is what happens in the country I live in, yes. And now the vendor that works directly with Supermicro takes my hardware for repair on the same conditions as the others. In any case, no matter what box (white, black, beige, silver, green, red, purple) I still experience:

1. Downtime of the box (obviously).
2. A chain of phone calls to support, the language of which could be more censored.
3. The vendor coming and taking the brick with him.
4. Some time while the repair takes place.
5. A smile from the vendor when they return the box to the DC.

This sequence applies to all the vendors I've mentioned. Now, what exactly is the problem, other than scary grandma's stories that my model does not scale and the big snow bear will eat me alive? I have to admit that I have no experience running 10K servers in one block like you do, so my respect goes to you, and I'd like to know the exact problems I might step into and the solutions to avoid them. Since you run that number of machines, you know them and can share the experience. But from the experience I do have, I cannot foresee any problems beyond those we have with HP or Dell or Sun or IBM boxes.

So could you please elaborate on your statements? I would appreciate that (and some other folks here would be interested to listen to your lesson as well).

Thank you.

-- Kind regards, BM Things, that are stupid at the beginning, rarely ends up wisely.
On 7/15/10 9:49 AM +0900 BM wrote:> On Thu, Jul 15, 2010 at 5:57 AM, Paul B. Henson <henson at acm.org> wrote: >> ZFS is great. It''s pretty much the only reason we''re running Solaris. > > Well, if this is the the only reason, then run FreeBSD instead. I run > Solaris because of the kernel architecture and other things that Linux > or any BSD simply can not do. For example, running something on a port > below 1000, but as a true non-root (i.e. no privileges dropping, but > straight-forward run by a non-root).Um, there''s plenty of things Solaris can do that Linux and FreeBSD can''t do, but non-root privileged ports is not one of them.
On Thu, Jul 15, 2010 at 4:38 PM, Frank Cusack <frank+lists/zfs at linetwo.net> wrote:> Um, there''s plenty of things Solaris can do that Linux and FreeBSD can''t > do, but non-root privileged ports is not one of them.Unfortunately, yes... :( -- Kind regards, BM Things, that are stupid at the beginning, rarely ends up wisely.
On Thu, July 15, 2010 09:38, Frank Cusack wrote:> On 7/15/10 9:49 AM +0900 BM wrote: > >> On Thu, Jul 15, 2010 at 5:57 AM, Paul B. Henson <henson at acm.org> wrote: >> >>> ZFS is great. It''s pretty much the only reason we''re running Solaris. >>> >> >> Well, if this is the the only reason, then run FreeBSD instead. I run >> Solaris because of the kernel architecture and other things that Linux >> or any BSD simply can not do. For example, running something on a port below 1000, but as a true >> non-root (i.e. no privileges dropping, but straight-forward run by a non-root). > > Um, there''s plenty of things Solaris can do that Linux and FreeBSD can''t > do, but non-root privileged ports is not one of them.Using least privileges'' "net_privaddr" allows a process to bind to a port below 1000 without granting full root access to the process owner. Regards, Siggi
On Jul 14, 2010, at 9:57 AM, Edward Ned Harvey wrote:>> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss- >> bounces at opensolaris.org] On Behalf Of Edward Ned Harvey >> >> When you pay for the higher prices for OEM hardware, you''re paying for >> the >> knowledge of parts availability and compatibility. And a single point >> vendor who supports the system as a whole, not just one component. > > For the record: > > I''m not saying this is always worth while. Sometimes I buy the enterprise > product and triple-platinum support. Sometimes I buy generic blackboxes > with mfgr warranty on individual components. It depends on your specific > needs at the time. > > I will say, that I am a highly paid senior admin. I only buy the generic > black boxes if I have interns or junior (no college level) people available > to support them. > > _______________________________________________ > zfs-discuss mailing list > zfs-discuss at opensolaris.org > http://mail.opensolaris.org/mailman/listinfo/zfs-discussAt a previous company, I used to think a black box solution was equivalent to the enterprise solutions. I changed my mind over time as the black boxes aged. A few of the problems: 1. The lights-out-management of black box solutions is shoddy at best. 2. Microcode updates for black box solutions are simply awful to implement. I bricked a number of boards because I couldn''t figure out which model of motherboard/bios was installed on the boxes. Network based microcode updates? No, try a floppy. 3. The black box ''fit'' in the case caused problems over time. The cables were sub-par and routed poorly, and the airflow simply sucked. 4. The hard drives ordered from Newegg, Fry''s or equivalent had more problems over time than their enterprise equivalent. 5. Even when supported, maintenance on the solutions (pulling them from racks, replacing boards, etc.) was far more difficult and time consuming than on their enterprise equivalent. As someone else pointed out, the black box solutions are cheap. I agree. They''re cheap initially, but a pain to support over time. As you mention above, if you''ve got a number of people to handle the above problems, they may be a good fit. Otherwise, no way. As always, YMMV. ----- Gregory Shaw, Enterprise IT Architect Phone: (303) 246-5411 Oracle Global IT Service Design Group 500 Eldorado Blvd, UBRM02-157 greg.shaw at oracle.com (work) Broomfield, CO 80021 gregs at fmsoft.com (home) Hoping the problem magically goes away by ignoring it is the "microsoft approach to programming" and should never be allowed. (Linus Torvalds)
On Thu, Jul 15, 2010 at 5:47 PM, Sigbjorn Lie <sigbjorn at nixtra.com> wrote:
> Using least privileges' "net_privaddr" allows a process to bind to a port below 1000 without
> granting full root access to the process owner.

Oh, I just misread the previous e-mail. By "Unfortunately, yes" I meant that BSD and Linux lack lots of things that Solaris can do. And yes, net_privaddr lets you run anything on a port below 1000 as a true non-root user. :)

-- Kind regards, BM Things, that are stupid at the beginning, rarely ends up wisely.
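To make the net_privaddr point concrete -- a hedged sketch, with the account name webservd purely as an example -- the extra privilege can be handed to an ordinary user so its processes may bind to privileged ports (those below 1024) without ever running as root:

  # grant the privilege on top of the basic set (user name is illustrative)
  usermod -K defaultpriv=basic,net_privaddr webservd
  # from a shell running as that user, confirm it is in the effective set
  ppriv -v $$ | grep net_privaddr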
BM <bogdan.maryniuk at gmail.com> wrote:
> > You seem to be totally convinced in the future of Linux and BTRFS,
> > so I recommend you leave this community and join that one.
>
> I am neither convinced nor unconvinced. All I am saying is:
> 1. There are no new builds.

Are you trying to tell us that Linux is dead because you cannot get binaries from Linus Torvalds? If OpenSolaris is Open _Source_, we need source, and that has not been cut off.

Jörg

-- EMail:joerg at schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin js at cs.tu-berlin.de (uni) joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
On Wed, July 14, 2010 23:51, Tim Cook wrote:> On Wed, Jul 14, 2010 at 9:27 PM, BM <bogdan.maryniuk at gmail.com> wrote: > >> On Thu, Jul 15, 2010 at 12:49 AM, Edward Ned Harvey >> <solaris2 at nedharvey.com> wrote: >> > I''ll second that. And I think this is how you can tell the >> difference: >> > With supermicro, do you have a single support number to call and a >> 4hour >> > onsite service response time? >> >> Yes. >> >> BTW, just for the record, people potentially have a bunch of other >> supermicros in a stock, that they''ve bought for the rest of the money >> that left from a budget that was initially estimated to get shiny >> Sun/Oracle hardware. :) So normally you put them online in a cluster >> and don''t really worry that one of them gone ? just power that thing >> down and disconnect from the whole grid. >> >> > When you pay for the higher prices for OEM hardware, you''re paying for >> the >> > knowledge of parts availability and compatibility. And a single point >> > vendor who supports the system as a whole, not just one component. >> >> What exactly kind of compatibility you''re talking about? For example, >> if I remove my broken mylar air shroud for X8 DP with a >> MCP-310-18008-0N number because I step on it accidentally :-D, pretty >> much I think I am gonna ask them to replace exactly THAT thing back. >> Or you want to let me tell you real stories how OEM hardware is >> supported and how many emails/phonecalls it involves? One of the very >> latest (just a week ago): Apple Support reported me that their >> engineers in US has no green idea why Darwin kernel panics on their >> XServe, so they suggested me replace mother board TWICE and keep OLDER >> firmware and never upgrade, since it will cause crash again (although >> identical server works just fine with newest firmware)! I told them >> NNN times that traceback of Darwin kernel was yelling about ACPI >> problem and gave them logs/tracebacks/transcripts etc, but they still >> have no idea where is the problem. Do I need such "support"? No. Not >> at all. >> >> -- >> Kind regards, BM >> >> Things, that are stupid at the beginning, rarely ends up wisely. >> _______________________________________________ >> >> > > You''re clearly talking about something completely different than everyone > else. Whitebox works GREAT if you''ve got 20 servers. Try scaling it to > 10,000. "A couple extras" ends up being an entire climate controlled > warehouse full of parts that may or may not be in the right city. Not to > mention you''ve then got full-time staff on-hand to constantly be replacing > parts. Your model doesn''t scale for 99% of businesses out there. Unless > they''re google, and they can leave a dead server in a rack for years, it''s > an unsustainable plan. Out of the fortune 500, I''d be willing to bet > there''s exactly zero companies that use whitebox systems, and for a > reason.You might want to talk to Google about that; as I understand it they decided that buying expensive servers was a waste of money precisely because of the high numbers they needed. Even with the good ones, some will fail, so they had to plan to work very well through server failures, so they can save huge amounts of money on hardware by buying cheap servers rather than expensive ones. 
And your juxtaposition of "fortune 500" and "99% of businesses" is significant; possibly the Fortune 500, other than Google, use expensive proprietary hardware; but 99% of businesses out there are NOT in the Fortune 500, and mostly use whitebox systems (and not rackmount at all; they''ll have one or at most two tower servers). -- David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/ Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/ Photos: http://dd-b.net/photography/gallery/ Dragaera: http://dragaera.info
On Thu, Jul 15, 2010 at 1:50 AM, BM <bogdan.maryniuk at gmail.com> wrote:
> Maybe I don't understand something, but we also had full-time staff on hand
> constantly replacing Dell's parts... so what's the problem?
> <snip/>
> What exactly am I missing, then?

I'm not sure why you would intentionally hire someone to be on staff to watch a tech from Dell come out and swap a part... I'm starting to think you HAVEN'T actually had any enterprise class boxes, because your description of service and what you get is not at all what reality is.

> Now, how does it not scale? The vendor that works with IBM directly <snip/>
> came and took my hardware for repair. The vendor that works with Dell (same
> situation) directly came and took my hardware for repair. The vendor that
> works with HP directly came and took my hardware for repair.

What are you talking about? They don't "come and take your hardware". If you're paying for a proper service contract, a tech brings the hardware to your site and swaps out the defective part or the whole chassis right in your datacenter. Again, you're talking like you've never owned a piece of enterprise hardware with a proper support contract.

> Apple officially does NOT repair their XServe, but gives the parts to a
> third-party company <snip/>

Apple isn't an enterprise class server provider. I'm not even sure why you'd bring them into the conversation, other than, once again, I think you have no idea what we're talking about.

> In any case, no matter what box (white, black, beige, silver, green, red,
> purple) I still experience:
> 1. Downtime of the box (obviously).
> 2. A chain of phone calls to support, the language of which could be more censored.
> 3. The vendor coming and taking the brick with him.
> 4. Some time while the repair takes place.
> 5. A smile from the vendor when they return the box to the DC.

Not how it works, not even close. If you've got a contract and you've got a bad piece of hardware, it's generally one call, with a tech onsite in four hours to fix the problem.

> This sequence applies to all the vendors I've mentioned.

No, it doesn't.

> Now, what exactly is the problem, other than scary grandma's stories that my
> model does not scale and the big snow bear will eat me alive?
I have to admit that I have no experience running 10K servers > in one block like you do, so my respect is to you and I''d like to know > the exact problems I might step into and the solution to avoid. Since > you running this amount of machines, so you know it and you can share > the experience. But from what I do have experience, I can not foresee > some additional problems that we have with HP or Dell or Sun or IBM > boxes. > >Again, where are you planning on keeping all the spare parts required to service boxes on your own? Who is going to manage your inventory? Who is going to be on staff to replace parts?> So could you please elaborate your statements? I would appreciate that > (and some other folks here as well would be interested to listen to > your lesson). > > Thank you. > > -- > Kind regards, BM > > Things, that are stupid at the beginning, rarely ends up wisely. >Gladly, it''s clear you haven''t actually ever had a service call with a proper 4-hour support contract from any major vendor. The "steps" you describe above aren''t even close to how it actually works. Once again, 0 companies in the fortune 500. You can continue to rant about how great whiteboxes are, but reality is they don''t scale. You can break it down any way you''d like and that isn''t changing. If I didn''t know any better, I''d think you''re just another internet troll. --Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20100715/9f0a0bc3/attachment.html>
On Thu, Jul 15, 2010 at 9:09 AM, David Dyer-Bennet <dd-b at dd-b.net> wrote:> > On Wed, July 14, 2010 23:51, Tim Cook wrote: > > On Wed, Jul 14, 2010 at 9:27 PM, BM <bogdan.maryniuk at gmail.com> wrote: > > > >> On Thu, Jul 15, 2010 at 12:49 AM, Edward Ned Harvey > >> <solaris2 at nedharvey.com> wrote: > >> > I''ll second that. And I think this is how you can tell the > >> difference: > >> > With supermicro, do you have a single support number to call and a > >> 4hour > >> > onsite service response time? > >> > >> Yes. > >> > >> BTW, just for the record, people potentially have a bunch of other > >> supermicros in a stock, that they''ve bought for the rest of the money > >> that left from a budget that was initially estimated to get shiny > >> Sun/Oracle hardware. :) So normally you put them online in a cluster > >> and don''t really worry that one of them gone ? just power that thing > >> down and disconnect from the whole grid. > >> > >> > When you pay for the higher prices for OEM hardware, you''re paying for > >> the > >> > knowledge of parts availability and compatibility. And a single point > >> > vendor who supports the system as a whole, not just one component. > >> > >> What exactly kind of compatibility you''re talking about? For example, > >> if I remove my broken mylar air shroud for X8 DP with a > >> MCP-310-18008-0N number because I step on it accidentally :-D, pretty > >> much I think I am gonna ask them to replace exactly THAT thing back. > >> Or you want to let me tell you real stories how OEM hardware is > >> supported and how many emails/phonecalls it involves? One of the very > >> latest (just a week ago): Apple Support reported me that their > >> engineers in US has no green idea why Darwin kernel panics on their > >> XServe, so they suggested me replace mother board TWICE and keep OLDER > >> firmware and never upgrade, since it will cause crash again (although > >> identical server works just fine with newest firmware)! I told them > >> NNN times that traceback of Darwin kernel was yelling about ACPI > >> problem and gave them logs/tracebacks/transcripts etc, but they still > >> have no idea where is the problem. Do I need such "support"? No. Not > >> at all. > >> > >> -- > >> Kind regards, BM > >> > >> Things, that are stupid at the beginning, rarely ends up wisely. > >> _______________________________________________ > >> > >> > > > > You''re clearly talking about something completely different than everyone > > else. Whitebox works GREAT if you''ve got 20 servers. Try scaling it to > > 10,000. "A couple extras" ends up being an entire climate controlled > > warehouse full of parts that may or may not be in the right city. Not to > > mention you''ve then got full-time staff on-hand to constantly be > replacing > > parts. Your model doesn''t scale for 99% of businesses out there. Unless > > they''re google, and they can leave a dead server in a rack for years, > it''s > > an unsustainable plan. Out of the fortune 500, I''d be willing to bet > > there''s exactly zero companies that use whitebox systems, and for a > > reason. > > You might want to talk to Google about that; as I understand it they > decided that buying expensive servers was a waste of money precisely > because of the high numbers they needed. Even with the good ones, some > will fail, so they had to plan to work very well through server failures, > so they can save huge amounts of money on hardware by buying cheap servers > rather than expensive ones. 
> >Obviously someone was going to bring up google, whose business model is unique, and doesn''t really apply to anyone else. Google makes it work because they order so many thousands of servers at a time, they can demand custom made parts for the servers, that are built to their specifications. Furthermore, the clustering and filesystem they use wouldn''t function at all for 99% of the workloads out there. Their core application: search, is what makes the hardware they use possible. If they were serving up a highly transactional database that required millisecond latency it would be a different story.> And your juxtaposition of "fortune 500" and "99% of businesses" is > significant; possibly the Fortune 500, other than Google, use expensive > proprietary hardware; but 99% of businesses out there are NOT in the > Fortune 500, and mostly use whitebox systems (and not rackmount at all; > they''ll have one or at most two tower servers).It isn''t a juxtaposition at all. I said it doesn''t SCALE for 99% of the businesses out there, specifically because I KNEW someone would bring up google. What google is doing is unique, and as such, they can find unique solutions to their problems. Just because a small company is using a whitebox, or small tower today, does not mean that model will SCALE. I can''t force you to read the whole sentence, but it might be beneficial to do so before replying. --Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20100715/bf8d10b6/attachment.html>
On Thu, July 15, 2010 09:29, Tim Cook wrote:> On Thu, Jul 15, 2010 at 9:09 AM, David Dyer-Bennet <dd-b at dd-b.net> wrote: > >> >> On Wed, July 14, 2010 23:51, Tim Cook wrote:>> > You''re clearly talking about something completely different than >> everyone >> > else. Whitebox works GREAT if you''ve got 20 servers. Try scaling it >> to >> > 10,000. "A couple extras" ends up being an entire climate controlled >> > warehouse full of parts that may or may not be in the right city. Not >> to >> > mention you''ve then got full-time staff on-hand to constantly be >> replacing >> > parts. Your model doesn''t scale for 99% of businesses out there. >> Unless >> > they''re google, and they can leave a dead server in a rack for years, >> it''s >> > an unsustainable plan. Out of the fortune 500, I''d be willing to bet >> > there''s exactly zero companies that use whitebox systems, and for a >> > reason. >> >> You might want to talk to Google about that; as I understand it they >> decided that buying expensive servers was a waste of money precisely >> because of the high numbers they needed. Even with the good ones, some >> will fail, so they had to plan to work very well through server >> failures, >> so they can save huge amounts of money on hardware by buying cheap >> servers rather than expensive ones.> Obviously someone was going to bring up google, whose business model is > unique, and doesn''t really apply to anyone else. Google makes it work > because they order so many thousands of servers at a time, they can demand > custom made parts for the servers, that are built to their specifications.Certainly they''re one of the most unusual setups out there, in several ways (size, plus details of what they do with their computers.> Furthermore, the clustering and filesystem they use wouldn''t function at > all for 99% of the workloads out there. Their core application: search, > is > what makes the hardware they use possible. If they were serving up a > highly > transactional database that required millisecond latency it would be a > different story.Again, I''m not at all convinced of that "99%" bit. Obviously low-latency transactional database applications are about the polar opposite of what Google does. However, transactional database applications are nearer 1% than 99% of the workloads out there, at every shop I''ve worked at or seen detailed descriptions of. Big email farms, for example, don''t generally have that kind of database at all. Big web farms probably do have some databases used that way -- but not for that high a percentage of their traffic, and generally running on one big server while the web is spread across hundreds of servers. Akamai is more like Google in a bunch of ways than most places. Wikipedia and ebay and amazon have huge web front-ends, while also needing transactional database support. Um, maybe I''m getting really too far afield from ZFS. I''ll shut up now :-) . -- David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/ Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/ Photos: http://dd-b.net/photography/gallery/ Dragaera: http://dragaera.info
On Thu, Jul 15, 2010 at 11:18 PM, Tim Cook <tim at cook.ms> wrote:> ...[All BS skipped]> Gladly, it's clear you haven't actually ever had a service call with a > proper 4-hour support contract from any major vendor. Blah-blah-blah... Mr. Capercaillie, you're not listening to anybody except yourself.
1. Vendors that work with SuperMicro DO HAVE four-hour on-site support, if you pay for it, just like any other vendor. We didn't have that, since we had no need for it. But if you do need it, it is not an issue.
2. Support for SuperMicro in our country is no different from IBM (which is not even the real IBM, but a third-party company that only merchandises the name), or HP, or Dell, or especially Sun, which has even more quirks and hassles around it than SuperMicro support.
Besides, before you post nonsense here rudely insulting people, do a little homework by googling next time. -- Kind regards, BM Things, that are stupid at the beginning, rarely ends up wisely.
Ok guys, can we please kill this thread about commodity versus enterprise hardware? Let's agree on one thing: Some people believe commodity hardware is just as good as enterprise systems. Other people do not believe that. In both situations, the conclusion has been reached based on personal experience. (I am one of the latter, and I have specific stories if anybody's interested off-list.) Even if one of them isn't as reliable as the other, it can still be acceptable in farms, where some number of failed systems is acceptable.
> Ok guys, can we please kill this thread about commodity versus enterprise > hardware?+1 -- Dave Pooser, ACSA Manager of Information Services Alford Media http://www.alfordmedia.com
On Fri, Jul 16, 2010 at 12:01 AM, Edward Ned Harvey <solaris2 at nedharvey.com> wrote:> Ok guys, can we please kill this thread about commodity versus enterprise hardware? > > Let's agree on one thing: Some people believe commodity hardware is just as good as enterprise systems. Other people do not believe that. In both situations, the conclusion has been reached based on personal experience. (I am one of the latter, and I have specific stories if anybody's interested off-list.) > > Even if one of them isn't as reliable as the other, it can still be acceptable in farms, where some number of failed systems is acceptable. > >+1. The only thing is that SuperMicro is not what you are so quick to call "commodity": http://www.supermicro.com/products/nfo/files/SAS2/SAS2_1004.pdf ; these boxes work just fine. YMMV though. Cheers. -- Kind regards, BM Things, that are stupid at the beginning, rarely ends up wisely.
Bogdan Maryniuk wrote:> Or you want to let me tell you real stories how OEM hardware is > supported and how many emails/phonecalls it involves? One of the very > latest (just a week ago): Apple Support reported me that their > engineers in US has no green idea why Darwin kernel panics on their You can't seriously be comparing Apple hardware to Sun hardware. Give me a break.> Do I need such "support"? No. Not at all. Hint: enterprise-class support != consumer-class support. You buy consumer hardware, you pay consumer prices, you get consumer support.
On Fri, Jul 16, 2010 at 3:26 AM, Linder, Doug <Doug.Linder at merchantlink.com> wrote:> Hint: enterprise-class support != consumer-class support. > You buy consumer hardware, you pay consumer prices, > you get consumer support. +1 -- Kind regards, BM Things, that are stupid at the beginning, rarely ends up wisely.
Obviously, this thread got sidetracked on a tangent about different types of hardware, be it mac mini, supermicro, dell, or sun/oracle. But I do find the subject of the legality of ZFS and the netapp lawsuit(s) to be an interesting one. (And thanks to whoever pointed out the information on how MS VSS is different and inferior; I didn't know.) Does anybody still have anything to add about the legality? I think we've concluded:
- ZFS is pretty darn safe in solaris/opensolaris.
- For some time to come, Netapp may continue bullying others.
- There is dispute regarding how much the Netapp lawsuit influenced Apple to abandon ZFS in OSX.
- ZFS is a selling feature for solaris & sun/oracle hardware.
- BTRFS may be an alternative someday, but not usable yet.
> Losing ZFS would indeed be disastrous, as it would > leave Solaris with > only the Veritas File System (VxFS) as a semi-modern > filesystem, and a > non-native FS at that (i.e. VxFS is a 3rd-party > for-pay FS, which > severely inhibits its uptake). UFS is just way too old > to be competitive > these days. Having come to depend on them, I would certainly feel the absence of some of those features. But how come everyone forgets about QFS? http://www.sun.com/storage/management_software/data_management/qfs/index.xml http://en.wikipedia.org/wiki/QFS http://hub.opensolaris.org/bin/view/Project+samqfs/WebHome -- This message posted from opensolaris.org
> On Tue, 13 Jul 2010, Edward Ned Harvey wrote: > > > It is true there''s no new build published in the > last 3 months. But you > > can''t use that to assume they''re killing the > community. > > Hmm, the community seems to think they''re killing the > community: > > http://developers.slashdot.org/story/10/07/14/1448209 > /OpenSolaris-Governing-Board-Closing-Shop?from=rss > > > ZFS is great. It''s pretty much the only reason we''re > running Solaris. But I > don''t have much confidence Oracle Solaris is going to > be a product I''m > going to want to run in the future. We barely put our > ZFS stuff into > production last year but quite frankly I''m already on > the lookout for > something to replace it. > > No new version of OpenSolaris (which we were about to > start migrating to). > No new update of Solaris 10. *Zero* information about > what the hell''s going > on...Presumably if you have a maintenance contract or some other formal relationship, you could get an NDA briefing. Not having been to one yet myself, I don''t know what that would tell you, but presumably more than without it. Still, the silence is quite unhelpful, and the apparent lack of anyone willing to recognize that, and with the authority to do anything about it, is troubling.> ZFS will surely live on as the filesystem under the > hood in the doubtlessly > forthcoming Oracle "database appliances", and I''m > sure they''ll keep selling > their NAS devices. But for home users? I doubt it. I > was about to build a > big storage box at home running OpenSolaris, I froze > that project. Oracle > is all about the money. Which I guess is why they''re > succeeding and Sun > failed to the point of having to sell out to them. My > home use wasn''t > exactly going to make them a profit, but on the other > hand, the philosophy > that led to my not using the product at home is a > direct cause of my lack > of desire to continue using it at work, and while > we''re not exactly a huge > client we''ve dropped a decent penny or two in Sun''s > wallet over the years.FWIW, you''re not the only one that''s tried to make that point!> Who knows, maybe Oracle will start to play ball > before August 16th and the > OpenSolaris Governing Board won''t shut themselves > down. But I wouldn''t hold > my breath.Postponement of respiration pending hypothetical actions by others is seldom an effective survival strategy. Nevertheless, the zfs on my Sun Blade 2000 currently running SXCE snv_97 (pending luactivate and reboot to switch to snv_129) is doing just fine with what is presently 3TB of redundant storage, and will eventually grow to 9TB as I populate the rest of the slots in my JBOD (8 slots; 2 x 1TB mirror for root; presently also 2 x 2TB mirror for data, but that will change to 5 x 2TB raidz + 1 2TB hot spare when I can afford four more 2TB drives). I have a spare power supply and some other odds and ends for the Sun Blade 2000, so, with fingers crossed, it will run (and heat my house :-) for quite some time to come, regardless of availability of future software updates. If not, I''m sure I have an ISO of SXCE 129 or so for x86 somewhere too, which I could put on any cheap x86 box with a PCIx slot for my SAS controller, and just import the zpools and go. -- This message posted from opensolaris.org
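For anyone sketching out a similar home JBOD, here is roughly what the target layout described above would look like from the command line. This is only a sketch: the pool name and the c#t#d# device names are placeholders, not the poster's actual devices, and the real disks would need to be identified with format first.

    # five 2TB drives in a single raidz vdev, plus one 2TB hot spare
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
    zpool add tank spare c2t5d0

    # an existing 2 x 2TB mirror cannot be reshaped into raidz in place,
    # so its datasets would be copied over (e.g. with zfs send/receive)
    # before the old pool is destroyed and its disks reused
    zpool status tank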
> never make it any better. Just for a record: Solaris > 9 and 10 from Sun > was a plain crap to work with, and still is > inconvenient conservative > stagnationware. They won''t build a free cool toolsEverybody but geeks _wants_ stagnationware, if you means something that just runs. Even my old Sun Blade 100 at home still has Solaris 9 on it, because I haven''t had a day to kill to split the mirror, load something newer like the last SXCE, and get everything on there working on it. (My other SPARC is running a semi-recent SXCE, and pending activation of an already installed most recent SXCE. Sitting at a Sun, I still prefer CDE to GNOME, and the best graphics card I have for that box won''t work with the newer Xorg server, so I can''t see putting OpenSolaris on it.) For instance, recent enough Solaris 10 updates to be able to do zfs root are pretty decent; you get into the habit of doing live upgrades even for patching, so you can minimize downtime. Hardly stagnant, considering that the initial release of Solaris 10 didn''t even have zfs in it yet.> for Solaris, hence > the whole thing will turned to be a dry job for > trained monkeys > wearing suits in a corporations. Nothing more. That''s > a philosophy of > last decade, but IT now is very changing and is very > different. That > is why Oracle''s idea to kill community is totally > stupid. And that''s > why IBM will win, because you run the same Linux on > their hardware as > you run at your home. > > Yes, Oracle will run good for a while, using the > inertia of a hype > (and latest their financial report proves that), but > soon people will > realize that Oracle is just another evil mean beast > with great > marketing and the same sh*tty products as they always > had. Buy Solaris > for any single little purpose? No way ever! I may buy > support and/or > security patches, updates. But not the OS itself. If > that is the only > option, then I''d rather stick to Linux from other > vendor, i.e. RedHat. > That will lead me to no more talk to Oracle about > software at OS > level, only applications (if I am an idiot enough to > jump into APEX or > something like that). Hence, if all I can do is talk > only about > hardware (well, not really, because no more > hardware-only support!!!), > then I''d better talk to IBM, if I need a brand and I > consider myself > too dumb to get SuperMicro instead. IBM System x3550 > M3 is still > better by characteristics than equivalent from > Oracle, it is OEM if > somebody needs that at first place and is still > cheaper than Oracle''s > similar class. And IBM stuff just works great (at > least if we talk > about hardware).I''m not going to say you''re wrong, because in part I agree with you. Systems people can run at home, desktops, laptops, those are all what get future mindshare and eventually get people with big bucks spending them. But the simple fact that Sun went down suggests that just being all lovey-dovey (and plenty of people thought that Sun wasn''t lovey-dovey _enough_?) won''t keep you in business either. [...]> > But for home users? I doubt it. I was about to > build a > > big storage box at home running OpenSolaris, I > froze that project.Mine''s running SXCE, and unless I can find a solution to getting decent graphics working with Xorg on it, probably always will be. But the big (well, target 9TB redundant; presently 3TB redundant) storage is doing just fine. Being super latest and greatest just isn''t necessary for that.> Same here. 
A lot of nice ideas and potential > open-source tools > basically frozen and I think gonna be dumped. We > (geeks) won''t build > stuff for Larry just for free. We need OS back opened > in reward. So I > think OpenSolaris is pretty much game over, thanks to > the Oracle. Some > Oracle fanboys might call it a plain FUD, hope to get > updates etc, but > the reality is that Oracle to OpenSolaris is pretty > much the same what > Palm did for BeOS. > > Enjoy your last svn_134 build. >I can''t rule out that possibility, but I see some reasons to think that it''s worth being patient for a couple more months. As it is, I find myself updating my Mac and Windows every darn week; so I''m pretty much past getting a kick out of updating just to see what''s kewl. -- This message posted from opensolaris.org
> > It'd be handy to have a mechanism where > applications could register for > > snapshot notifications. When one is about to > happen, they could be told > > about it and do what they need to do. Once all the > applications have > > acknowledged the snapshot alert--and/or after a > pre-set timeout--the file > > system would create the snapshot, and then notify > the applications that > > it's done. > > > Why would an application need to be notified? I think > you're under the > misconception that something happens when a ZFS > snapshot is taken. > NOTHING happens when a snapshot is taken (OK, well, > there is the > snapshot reference name created). Blocks aren't moved > around, we don't > copy anything, etc. Applications have no need to "do > anything" before a > snapshot is taken. It would be nice to have applications request to be notified before a snapshot is taken; once those that requested notification have acknowledged that they're ready, the snapshot would be taken, and then another notification sent that it was taken. Prior to indicating they were ready, the apps could have achieved a logically consistent on-disk state. That would eliminate the need for (for example) separate database backups, if you could have a snapshot with the database on it in a consistent state. If I understand correctly, _that's_ what the notification mechanism on Windows achieves. Of course, another approach would be for a zfs-aware app to keep its storage on a dedicated filesystem or zvol, and itself control when snapshots were taken of that. As lightweight as zvols and filesystems are under zfs, having each app that needed such functionality have its own would be no big deal, and would even be handy insofar as each app could create snapshots on its own independent schedule. Either way, the apps would have to be aware of how to participate in coordinating their logical consistency on disk with the snapshot (or vice versa).> > Given that snapshots will probably be more popular > in the future (WAFL > NFS/LUNs, ZFS, Btrfs, VMware disk image snapshots, > etc.), an agreed upon > consensus would be handy (D-Bus? POSIX?). Hypothetically, one could hide some of the details with suitable libraries and infrastructure. -- This message posted from opensolaris.org
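To make the second idea above a little more concrete, here is a minimal sketch of an application owning its own dataset and snapshotting it on its own schedule. The dataset name, snapshot naming, and the quiesce/resume steps are all invented for illustration; the quiesce step stands in for whatever the application does to reach a logically consistent on-disk state.

    # one lightweight dataset per application
    zfs create tank/appdata

    # application-driven snapshot cycle, run by the app or a wrapper script
    app_quiesce                                    # placeholder: app flushes and pauses writes
    zfs snapshot tank/appdata@$(date +%Y%m%d-%H%M%S)
    app_resume                                     # placeholder: app resumes normal operation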
Richard Elling
2010-Jul-16 14:15 UTC
[zfs-discuss] snapshot notification [was: Legality and the future of zfs...]
On Jul 16, 2010, at 3:39 PM, Richard L. Hamilton wrote:> Of course, another approach would be for a zfs aware app to be > keeping its storage on a dedicated filesystem or zvol, and itself > control when snapshots were taken of that. As lightweight as > zvols and filesystems are under zfs, having each app that needed > such functionality have its own would be no big deal, and would > even be handy insofar as each app could create snapshots on > its own independent schedule.No new API is needed. Simply delegate to the owner of the process the ability to take snapshots. You need to do this anyway, for security purposes. Then use open() to create a file in the .zfs snapshot subdirectory. -- richard -- Richard Elling richard at nexenta.com +1-760-896-4422
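A rough sketch of the delegation Richard describes, with an invented user and dataset name. One caveat from my own reading of the docs: snapshot creation through the .zfs directory is normally done by creating a directory under .zfs/snapshot rather than opening a plain file, and the delegated user can also just run zfs snapshot directly; treat the exact mechanics as something to verify on your own build.

    # delegate snapshot (and mount) rights on the app's dataset to its owner
    zfs allow appuser snapshot,mount tank/appdata

    # the application, running as appuser, can then snapshot itself...
    zfs snapshot tank/appdata@consistent-checkpoint

    # ...or, if it only has filesystem access (e.g. over NFS), create the
    # snapshot by making a directory under .zfs/snapshot
    mkdir /tank/appdata/.zfs/snapshot/consistent-checkpoint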
On Fri, July 16, 2010 08:39, Richard L. Hamilton wrote:>> > It''d be handy to have a mechanism where >> applications could register for >> > snapshot notifications. When one is about to >> happen, they could be told >> > about it and do what they need to do. Once all the >> applications have >> > acknowledged the snapshot alert--and/or after a >> pre-set timeout--the file >> > system would create the snapshot, and then notify >> the applications that >> > it''s done. >> > >> Why would an application need to be notified? I think >> you''re under the >> misconception that something happens when a ZFS >> snapshot is taken. >> NOTHING happens when a snapshot is taken (OK, well, >> there is the >> snapshot reference name created). Blocks aren''t moved >> around, we don''t >> copy anything, etc. Applications have no need to "do >> anything" before a >> snapshot it taken. > > It would be nice to have applications request to be notified > before a snapshot is taken, and when that have requested > notification have acknowledged that they''re ready, the snapshot > would be taken; and then another notification sent that it was > taken. Prior to indicating they were ready, the apps could > have achieved a logically consistent on disk state. That > would eliminate the need for (for example) separate database > backups, if you could have a snapshot with the database on it > in a consistent state.Any software dependent on cooperating with the filesystem to ensure that the files are consistent in a snapshot fails the cord-yank test (which is equivalent to the "processor explodes" test and the "power supply bursts into flames" test and the "disk drive shatters" test and so forth). It can''t survive unavoidable physical-world events. Conversely, any scheme for a program writing to its files that PASSES those tests will be fine with arbitrary snapshots, too. For that matter, remember that the "snapshot" may be taken on a zfs server on another continent which is making the storage available via iScsi; there''s currently no notification channel to tell the software the snapshot is happening. -- David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/ Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/ Photos: http://dd-b.net/photography/gallery/ Dragaera: http://dragaera.info
On 7/16/10 12:02 PM -0500 David Dyer-Bennet wrote:>> It would be nice to have applications request to be notified >> before a snapshot is taken, and when those that requested >> notification have acknowledged that they're ready, the snapshot >> would be taken; and then another notification sent that it was >> taken. Prior to indicating they were ready, the apps could >> have achieved a logically consistent on-disk state. That >> would eliminate the need for (for example) separate database >> backups, if you could have a snapshot with the database on it >> in a consistent state. > > Any software dependent on cooperating with the filesystem to ensure that > the files are consistent in a snapshot fails the cord-yank test (which is > equivalent to the "processor explodes" test and the "power supply bursts > into flames" test and the "disk drive shatters" test and so forth). It > can't survive unavoidable physical-world events. It can, if said software can roll back to the last consistent state. That may or may not be "recent" wrt a snapshot. If an application is very active, it's possible that many snapshots may be taken, none of which are actually in a state the application can use to recover from. That renders snapshots much less effective. Also, just administratively, and perhaps legally, it's highly desirable to know that the time of a snapshot is the actual time that application state can be recovered to or referenced to. Also, if an application cannot survive a cord-yank test, it might be even more highly desirable that snapshots be a stable state from which the application can be restarted. A notification mechanism is pretty desirable, IMHO.
On Fri, July 16, 2010 14:07, Frank Cusack wrote:> On 7/16/10 12:02 PM -0500 David Dyer-Bennet wrote: >>> It would be nice to have applications request to be notified >>> before a snapshot is taken, and when that have requested >>> notification have acknowledged that they''re ready, the snapshot >>> would be taken; and then another notification sent that it was >>> taken. Prior to indicating they were ready, the apps could >>> have achieved a logically consistent on disk state. That >>> would eliminate the need for (for example) separate database >>> backups, if you could have a snapshot with the database on it >>> in a consistent state. >> >> Any software dependent on cooperating with the filesystem to ensure that >> the files are consistent in a snapshot fails the cord-yank test (which >> is >> equivalent to the "processor explodes" test and the "power supply bursts >> into flames" test and the "disk drive shatters" test and so forth). It >> can''t survive unavoidable physical-world events. > > It can, if said software can roll back to the last consistent state. > That may or may not be "recent" wrt a snapshot. If an application is > very active, it''s possible that many snapshots may be taken, none of > which are actually in a state the application can use to recover from. > Rendering snapshots much less effective.Wait, if the application can in fact survive the "cord pull" test then by definition of "survive", all the snapshots are useful. They''ll be everything consistent that was committed to disk by the time of the yank (or snapshot); which, it seems to me, is the very best that anybody could hope for.> Also, just administratively, and perhaps legally, it''s highly desirable > to know that the time of a snapshot is the actual time that application > state can be recovered to or referenced to.Maybe, but since that''s not achievable for your core corporate asset (the database), I think of it as a pipe dream rather than a goal.> Also, if an application cannot survive a cord-yank test, it might be > even more highly desirable that snapshots be a stable that from which > the application can be restarted.If it cannot survive a cord-yank test, it should not be run, ever, by anybody, for any purpose more important than playing a game. -- David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/ Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/ Photos: http://dd-b.net/photography/gallery/ Dragaera: http://dragaera.info
On 7/16/10 3:07 PM -0500 David Dyer-Bennet wrote:> > On Fri, July 16, 2010 14:07, Frank Cusack wrote: >> On 7/16/10 12:02 PM -0500 David Dyer-Bennet wrote: >>>> It would be nice to have applications request to be notified >>>> before a snapshot is taken, and when that have requested >>>> notification have acknowledged that they''re ready, the snapshot >>>> would be taken; and then another notification sent that it was >>>> taken. Prior to indicating they were ready, the apps could >>>> have achieved a logically consistent on disk state. That >>>> would eliminate the need for (for example) separate database >>>> backups, if you could have a snapshot with the database on it >>>> in a consistent state. >>> >>> Any software dependent on cooperating with the filesystem to ensure that >>> the files are consistent in a snapshot fails the cord-yank test (which >>> is >>> equivalent to the "processor explodes" test and the "power supply bursts >>> into flames" test and the "disk drive shatters" test and so forth). It >>> can''t survive unavoidable physical-world events. >> >> It can, if said software can roll back to the last consistent state. >> That may or may not be "recent" wrt a snapshot. If an application is >> very active, it''s possible that many snapshots may be taken, none of >> which are actually in a state the application can use to recover from. >> Rendering snapshots much less effective. > > Wait, if the application can in fact survive the "cord pull" test then by > definition of "survive", all the snapshots are useful.Useful, yes, but you missed my point about recency. They may not be as useful as they could be, and depending on how data changes older data or transactions may be unrecoverable due to an inconsistent snapshot.> They''ll be > everything consistent that was committed to disk by the time of the yank > (or snapshot); which, it seems to me, is the very best that anybody could > hope for.This is true only if transactions are journaled somehow, and thus a snapshot could return the application to it''s current state -1.>> Also, just administratively, and perhaps legally, it''s highly desirable >> to know that the time of a snapshot is the actual time that application >> state can be recovered to or referenced to. > > Maybe, but since that''s not achievable for your core corporate asset (the > database), I think of it as a pipe dream rather than a goal.Ah, because we can''t achieve this ideal for some very critical application, we shouldn''t bother getting there for other applications.>> Also, if an application cannot survive a cord-yank test, it might be >> even more highly desirable that snapshots be a stable that from which >> the application can be restarted. > > If it cannot survive a cord-yank test, it should not be run, ever, by > anybody, for any purpose more important than playing a game.Nice ideal world you live in ... wish I were there. It''s not as if a notification mechanism somehow makes things worse for applications that don''t use it.
Richard Elling
2010-Jul-16 22:57 UTC
[zfs-discuss] carrying on [was: Legality and the future of zfs...]
On Jul 15, 2010, at 4:48 AM, BM wrote:> On Thu, Jul 15, 2010 at 10:53 AM, Garrett D''Amore <garrett at nexenta.com> wrote: >> The *code* is probably not going away (even updates to the kernel). >> Even if the community dies, is killed, or commits OGB induced suicide. > > 1. You used correct word: "probably".The sun will probably rise tomorrow :-)> 2. No community = stale outdated code.But there is a community. What is lacking is that Oracle, in their infinite wisdom, has stopped producing OpenSolaris developer binary releases. Not to be outdone, they''ve stopped other OS releases as well. Surely, this is a temporary situation. Of the remaining distro builders who offer updated builds based on OpenSolaris code, I''m proud to be a part of the Nexenta team.>> There is another piece I''ll add: even if Oracle were to stop releasing >> ZFS or OpenSolaris source code, there are enough of us with a vested >> interest (commercial!) in its future that we would continue to develop >> it outside of Oracle. It won''t just go stagnant and die. > > So you''re saying "let''s fork it".No. What he is saying is that distro builders need to step up to the challenge and release distros. For some reason (good marketing) people seem to think that Linux == Red Hat. Clearly, that is not the case. Please, do not confuse distribution of binaries with distribution of source.>> I believe I can safely say that Nexenta is committed to the continued development and enhancement of this code base -- and to doing so in the open. > Yeah, and Nexenta is also committed to backport newest updates from > 140 and younger builds just back to snv_134. So I can imagine that > soon new OS from Nexenta will be called "Super Nexenta Version 134". > :-)Please. The NexentaStor OS 3.0.3 release is b134f. b134g will be next. We do not expect the OpenSolaris community to replace b135 with Nexenta Core 3.0.3. Rather, we would very much like to see Oracle continue to produce developer distributions which more closely track the source changes. NexentaStor has a very focused market. The losers in the Oracle deaf-mute game are the people who want to use OpenSolaris for applications other than a NAS server.> Currently from what I see, I think Nexenta will also die eventually.Indeed. We will all die. And the good news is that someone will pick up the knowledge and evolve. Darwin was right. This is the circle of life.> Because of BTRFS for Linux, Linux''s popularity itself and also thanks > to the Oracle''s help.BTRFS does not matter until it is a primary file system for a dominant distribution. From what I can tell, the dominant Linux distribution file system is ext. That will change some day, but we heard the same story you are replaying about BTRFS from the Reiser file system aficionados and the XFS evangelists. There is absolutely no doubt that Solaris will use ZFS as its primary file system. But there is no internal or external force causing Red Hat to change their primary file system from ext. -- richard
>>>>> "re" == Richard Elling <richard at nexenta.com> writes:re> we would very much like to see Oracle continue to produce re> developer distributions which more closely track the source re> changes. I''d rather someone else than Oracle did it. Until someone else is doing the ``building'''', whatever that entails all the way from Mercurial to DVD, we will never know if the source we have is complete enough to do a fork if we need to. I realize everyone has in their heads, FORK == BAD. Yes, forks are usually bad, but the *ability to make forks* is good, because it ``decouples the investments our businesses make in OpenSolaris/ZFS from the volatility of Sun and Oracle''s business cycle,'''' to paraphrase some blog comment. Particularly when you are dealing with datasets so large it might cost tens of thousands to copy them into another format than ZFS, it''s important to have a >2 year plan for this instead of being subject to ``I am altering the deal. Pray I don''t alter it any further.'''' Nexenta being stuck at b134, and secret CVE fixes, does not look good. Though yeah, it looks better than it would if Nexenta didn''t exist. IMHO it''s important we don''t get stuck running Nexenta in the same spot we''re now stuck with OpenSolaris: with a bunch of CDDL-protected source that few people know how to use in practice because the build procedure is magical and secret. This is why GPL demands you release ``all build scripts''''! One good way to help make sure you''ve the ability to make a fork, is to get the source from one organization and the binary distribution from another. As long as they''re not too collusive, you can relax and rely on one of them to complain to the other. Another way is to use a source-based distribution like Gentoo or BSD, where the distributor includes a deliverable tool that produces bootable DVD''s from the revision control system, and ordinary contributors can introspect these tools and find any binary blobs that may exist. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20100718/a1090f5f/attachment.bin>
On Sun, Jul 18, 2010 at 10:19 PM, Miles Nordin <carton at ivy.net> wrote:> IMHO it''s important we don''t get stuck running Nexenta in the same > spot we''re now stuck with OpenSolaris: with a bunch of CDDL-protected > source that few people know how to use in practice because the build > procedure is magical and secret. ?This is why GPL demands you release > ``all build scripts''''!I don''t know if the GPL demands that but I think we''ve all learned a lesson from Oracle/Sun regarding that. Releasing source code and expecting people to figure out the rest could be called "open source" but it won''t create the kind of collaboration people usually expect. For any "fork" (or whatever people want to call it, there are many shades of gray) to succeed, the release and documentation of the build/testing infrastructure used to create the end product is as important as the main source code itself. I''m not saying Oracle/Sun should have released all and everything they used to create the OpenSolaris binary distribution (their product). I''m saying they should have first stopped treating it as a proprietary product and then released those bits to further forster external collaboration. But now that''s all history and discussing about how things could have been done won''t change anything. I hope that if we want to be able to move OpenSolaris to the next level, we can this time avoid falling into the same mouse trap. -- Giovanni Tirloni gtirloni at sysdroid.com
Pasi Kärkkäinen
2010-Jul-19 10:01 UTC
[zfs-discuss] carrying on [was: Legality and the future of zfs...]
On Sat, Jul 17, 2010 at 12:57:40AM +0200, Richard Elling wrote:> > > Because of BTRFS for Linux, Linux''s popularity itself and also thanks > > to the Oracle''s help. > > BTRFS does not matter until it is a primary file system for a dominant distribution. > From what I can tell, the dominant Linux distribution file system is ext. That will > change some day, but we heard the same story you are replaying about BTRFS > from the Reiser file system aficionados and the XFS evangelists. There is > absolutely no doubt that Solaris will use ZFS as its primary file system. But there is > no internal or external force causing Red Hat to change their primary file system > from ext. >Redhat Fedora 13 includes BTRFS, but it''s not used as a default (yet). F13 also supports yum (package management) rollback using BTRFS snapshots. I''m not sure if Fedora 14 will have BTRFS as a default.. RHEL6 beta also includes BTRFS support (tech preview), but again, not enabled as a default filesystem. Upcoming Ubuntu 10.10 will use BTRFS as a default. That''s the status in Linux world, afaik :) -- Pasi
Giovanni Tirloni <gtirloni at sysdroid.com> wrote:> On Sun, Jul 18, 2010 at 10:19 PM, Miles Nordin <carton at ivy.net> wrote: > > IMHO it's important we don't get stuck running Nexenta in the same > > spot we're now stuck with OpenSolaris: with a bunch of CDDL-protected > > source that few people know how to use in practice because the build > > procedure is magical and secret. This is why GPL demands you release > > ``all build scripts''! > > I don't know if the GPL demands that but I think we've all learned a > lesson from Oracle/Sun regarding that. The missing requirement to provide build scripts is a drawback of the CDDL. ...But believe me that the GPL would not help you here, as the GPL cannot force the original author (in this case Sun/Oracle or whoever) to supply the scripts in question.> Releasing source code and expecting people to figure out the rest > could be called "open source" but it won't create the kind of > collaboration people usually expect. As mentioned above, there is no license that can help you here. The Open Source Definition from the OSI is a general guideline that contains rules to decide whether a license is free enough to get the OSS sticker.> For any "fork" (or whatever people want to call it, there are many > shades of gray) to succeed, the release and documentation of the > build/testing infrastructure used to create the end product is as > important as the main source code itself. > > I'm not saying Oracle/Sun should have released all and everything they > used to create the OpenSolaris binary distribution (their product). > I'm saying they should have first stopped treating it as a proprietary > product and then released those bits to further foster external > collaboration. But now that's all history and discussing how > things could have been done won't change anything. You unfortunately cannot compel the author or copyright holder....> I hope that if we want to be able to move OpenSolaris to the next > level, we can this time avoid falling into the same mouse trap. This is a community issue. Do we have people that are willing to help? Jörg -- EMail:joerg at schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin js at cs.tu-berlin.de (uni) joerg.schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/ URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
Anil Gulecha
2010-Jul-19 10:27 UTC
[zfs-discuss] carrying on [was: Legality and the future of zfs...]
On Mon, Jul 19, 2010 at 3:31 PM, Pasi Kärkkäinen <pasik at iki.fi> wrote:> > Upcoming Ubuntu 10.10 will use BTRFS as a default. >Though there was some discussion around this, I don't think the above is a given. The ubuntu devs would look at the status of the project, and decide closer to the release. ~Anil PS : Unless I missed any recent announcement by Ubuntu..
Dick Hoogendijk
2010-Jul-19 10:43 UTC
[zfs-discuss] carrying on [was: Legality and the future of zfs...]
On 19-7-2010 12:27, Anil Gulecha wrote:> On Mon, Jul 19, 2010 at 3:31 PM, Pasi Kärkkäinen<pasik at iki.fi> wrote: >> Upcoming Ubuntu 10.10 will use BTRFS as a default. > Though there was some discussion around this, I don't think the above > is a given. The ubuntu devs would look at the status of the project, > and decide closer to the release. Ubuntu always likes to be "on the edge" even if btrfs is far from being 'stable'. I would not want to run a release that does this. Servers need stability and reliability. Btrfs is far from this.
On Mon, Jul 19, 2010 at 7:12 AM, Joerg Schilling <Joerg.Schilling at fokus.fraunhofer.de> wrote:> Giovanni Tirloni <gtirloni at sysdroid.com> wrote: > >> On Sun, Jul 18, 2010 at 10:19 PM, Miles Nordin <carton at ivy.net> wrote: >> > IMHO it''s important we don''t get stuck running Nexenta in the same >> > spot we''re now stuck with OpenSolaris: with a bunch of CDDL-protected >> > source that few people know how to use in practice because the build >> > procedure is magical and secret. ?This is why GPL demands you release >> > ``all build scripts''''! >> >> I don''t know if the GPL demands that but I think we''ve all learned a >> lesson from Oracle/Sun regarding that. > > The missing requirement to provide build scripts is a drawback of the CDDL. > > ...But believe me that the GPL would not help you here, as the GPL cannot > force the original author (in this case Sun/Oracle or whoever) to supply the > scripts in question.I don''t have any doubts that the GPL (or any other license) would not prevent the current situation. It''s more of a strategic/business decision.>> I hope that if we want to be able to move OpenSolaris to the next >> level, we can this time avoid falling into the same mouse trap. > > This is a community issue. > > Do we have people that are willing to help?Yep! Just need a little guidance in the beginning :) -- Giovanni Tirloni gtirloni at sysdroid.com
Andrej Podzimek
2010-Jul-19 11:26 UTC
[zfs-discuss] carrying on [was: Legality and the future of zfs...]
> Ubuntu always likes to be "on the edge" even if btrfs is far from being > 'stable'. I would not want to run a release that does this. Servers need > stability and reliability. Btrfs is far from this. Well, it seems to me that this is a well-known and very popular circular argument:
A: XYZ is far from stability and reliability.
B: Are you sure? Have you had any serious issues with XYZ? Are there any failure reports and statistics? What are you comparing XYZ with?
A: How can I be sure? I cannot give XYZ a try, because it is so far from stability and reliability...
I run ArchLinux with Btrfs and OpenSolaris with ZFS. I haven't had a serious issue with either of them so far. (Well, in fact I had one issue with OpenSolaris in QEMU, but that's a well-known story, probably not related to ZFS: http://www.neuhalfen.name/2009/08/05/OpenSolaris_KVM_and_large_IDE_drives/.) As far as Btrfs is concerned, I am perfectly satisfied with it as far as performance and features go. On the other hand, Btrfs still has quite a lot of issues that need to be dealt with. For example,
1) Btrfs does not have mature and user-friendly command-line tools. AFAIK, you can only list your snapshots and subvolumes by grep'ing the tree dump. ;-)
2) there are still bugs that *must* be fixed before Btrfs can be seriously considered: http://www.mail-archive.com/linux-btrfs at vger.kernel.org/msg05130.html
Undoubtedly, ZFS is currently much more mature and usable than Btrfs. However, Btrfs can evolve very quickly, considering the huge community around Linux. For example, EXT4 was first released in late 2006 and I first deployed it (with a stable on-disk format) in early 2009. Andrej
On 12/07/2010 16:32, Erik Trimble wrote:> > ZFS is NOT automatically ACID. There is no guarantee of commits for > async write operations. You would have to use synchronous writes to > guarantee commits. And, furthermore, I think that there is a strong >
# zfs set sync=always pool
will force all I/O (async or sync) to be written synchronously. P.S. Still, I'm not saying it would make ZFS ACID. -- Robert Milkowski http://milek.blogspot.com
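Since the sync property is inheritable, setting it at the top of the pool covers every dataset underneath unless a dataset overrides it. The pool and dataset names below are just examples:

    zfs set sync=always tank             # top-level dataset; children inherit
    zfs get -r sync tank                 # check what each dataset ended up with
    zfs set sync=standard tank/scratch   # opt a scratch dataset back out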
Robert Milkowski
2010-Jul-19 12:29 UTC
[zfs-discuss] carrying on [was: Legality and the future of zfs...]
On 16/07/2010 23:57, Richard Elling wrote:> On Jul 15, 2010, at 4:48 AM, BM wrote: > > >> 2. No community = stale outdated code. >> > But there is a community. What is lacking is that Oracle, in their infinite > wisdom, has stopped producing OpenSolaris developer binary releases. > Not to be outdone, they''ve stopped other OS releases as well. Surely, > this is a temporary situation. > >AFAIK the dev OSOL releases are still being produced - they haven''t been made public since b134 though. -- Robert Milkowski http://milek.blogspot.com
Frank Middleton
2010-Jul-19 13:49 UTC
[zfs-discuss] carrying on [was: Legality and the future of zfs...]
On 07/19/10 07:26, Andrej Podzimek wrote:> I run ArchLinux with Btrfs and OpenSolaris with ZFS. I haven''t had a > serious issue with any of them so far.Moblin/Meego ships with btrfs by default. COW file system on a cell phone :-). Unsurprisingly for a read-mostly file system it seems pretty stable. There''s an interesting discussion about btrfs on Meego at http://lwn.net/Articles/387196/> Undoubtedly, ZFS is currently much more mature and usable than Btrfs.Agreed, but it''s not just ZFS, though. It''s the packaging system, beadm, stmf, the whole works. A simple yum update can be a terrifying experience and almost impossible to undo. And updating to a major new Linux release? Almost as bad as updating MSWindows. Open Solaris as an administerable system is simply years ahead of anything else.> However, Btrfs can evolve very quickly, considering the huge community > around Linux. For example, EXT4 was first released in late 2006 and I > first deployed it (with a stable on-disk format) in early 2009.But the infrastructure to make use of a ZFS-like manager simply isn''t there. As a Linux and Solaris developer and user of both, I''d take Solaris any day and so would everyone I know. But going back to the original topic, the tea leaves seem to be saying that Oracle is interested primarily in Solaris as a robust server OS and probably not so much for the desktop where there realistically isn''t going to be much revenue. But it would be a bad gamble if they lose a lot of mind-share. Legal issues over ZFS make it even worse. I get calls for help converting MSWindows applications and servers to Linux. ZFS and all the other goodies make a compelling case for Solaris (and Sun/Oracle hardware) instead but the uncertainties make it a hard sell. Oracle are you listening?
Hi, if you regard only changes to a single file as transactions, then flock() and fsync() are sufficient to reach ACID level with ZFS. To achieve transactions that change multiple files, you need flock(), fsync(), and snapshots: a snapshot for transaction commit, and a rollback for transaction abort. But the performance and the degree of parallelism are poor. With zfs sync=always on the pool you will not get transactions, because you have no way to commit or abort a transaction. The I in ACID needs isolated access to the data, but you cannot tell when the data is consistent (C). Therefore we would need a transaction interface to ZFS to gain ACID capability. It is not possible to get there with the current API to ZFS without using a database. Regards, Ulrich
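For what it's worth, the snapshot-as-commit pattern Ulrich describes would look roughly like the sketch below, assuming each application gets its own dataset (names are made up). It also illustrates why the degree of parallelism is poor: the whole dataset is the unit of commit, so only one such "transaction" can be open per dataset at a time.

    # begin: record the pre-transaction state
    zfs snapshot tank/appdata@txn-begin

    # ... application takes its flock() locks, writes, fsync()s ...

    # commit: keep the new state and drop the pre-image
    zfs destroy tank/appdata@txn-begin

    # abort: discard everything written since the snapshot
    zfs rollback tank/appdata@txn-begin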
On Mon, 19 Jul 2010, Joerg Schilling wrote:> > The missing requirement to provide build scripts is a drawback of the CDDL. > > ...But believe me that the GPL would not help you here, as the GPL cannot > force the original author (in this case Sun/Oracle or whoever) to supply the > scripts in question.There is also no GPL requirement for the scripts to work for anyone other than the person who wrote them. The only requirement is that they are what is normally used for development. Bob -- Bob Friesenhahn bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
> > ap> 2) there are still bugs that *must* be fixed before Btrfs can > ap> be seriously considered: > ap> http://www.mail-archive.com/linux-btrfs at vger.kernel.org/msg05130.html > > I really don't think that's a show-stopper. He filled the disk with > 2KB files. HE FILLED THE DISK WITH 2KB FILES. Well, if there were 50% overhead, then fine. That can happen. 80%? All right, still good... But what actually happened does not seem acceptable to me.> It's more, ``you think you're so clever, but you're not, see?'' I'm > not saying not to fix it. I'm saying it's not a show-stopper. I'm not saying it's a showstopper. I just don't think anyone could seriously consider a production deployment before this is fixed. Edward Shishkin is the maintainer and co-author of Reiser4, which has not been accepted into the kernel yet, despite the fact that many people have been using it successfully for years. (I am also one of the Reiser4 users and run it on some laptops I maintain.) So Edward's reaction is not surprising. ;-) It's like: "Hey! My stable filesystem stays out, but various experiments (EXT4, NILFS2, Btrfs, ...) are let in! How come?" Andrej
On Wed, Jul 14 at 23:51, Tim Cook wrote:
> Out of the Fortune 500, I'd be willing to bet there's exactly zero
> companies that use whitebox systems, and for a reason.
> --Tim

Sure, some core SAP system or HR data warehouse runs on name-brand gear, and maybe they have massive SANs with various capabilities that run on name-brand gear as well, but I'd guess that most every Fortune 500 company buys some large number of generic machines as well ("generic" being anything from Newegg build-it-yourself to the bargain SKUs from major PC companies that may not have mission-critical support contracts associated with them).

Any company that believes it can add more value in their IT supply chain than the vendor they'd be buying from would be foolish not to put energy into that space (if they can "afford" to). Google is but a single example, though I am sure there are others.

--
Eric D. Mudama
edmudama at mail.bounceswoosh.org
On Mon, 2010-07-19 at 17:54 -0600, Eric D. Mudama wrote:
> On Wed, Jul 14 at 23:51, Tim Cook wrote:
> > Out of the Fortune 500, I'd be willing to bet there's exactly zero
> > companies that use whitebox systems, and for a reason.
> > --Tim
>
> Sure, some core SAP system or HR data warehouse runs on name-brand
> gear, and maybe they have massive SANs with various capabilities that
> run on name-brand gear as well, but I'd guess that most every Fortune
> 500 company buys some large number of generic machines as well
> ("generic" being anything from Newegg build-it-yourself to the bargain
> SKUs from major PC companies that may not have mission-critical
> support contracts associated with them).
>
> Any company that believes it can add more value in their IT supply
> chain than the vendor they'd be buying from would be foolish not to
> put energy into that space (if they can "afford" to). Google is but a
> single example, though I am sure there are others.

They may *believe* they can, but almost no one ever does, because the up-front hardware savings come at the price of increased manpower, and companies aren't willing to make that trade.

I've been around a large number of different environments (finance, publishing, development, ISP, ASP, even HW manufacturing), and the only place I've ever seen non-name-brand servers in a datacenter or server-room production configuration is in Google-like massive deployments. Whitebox machines proliferate in SQE and desktop environments, where they're burnable and disposable. But for any kind of production use (or anything with a deployment staging or QA setup), I've only ever seen brand names, WITH the service contract fully paid up.

IT departments are *always* critically understaffed, and to make a whitebox deployment successful for production use you need dedicated staff for it - PERMANENT staff. Companies don't do that. Admins are so chronically overworked that they have no time to spend making a whitebox setup usable for production, even if they have the expertise. And you had better believe that we admins won't even think about production support for a box that doesn't have a service contract on it - hardware and software. Because no matter how good you are, you can't think of everything (or, if you can, it takes a while) - and the 20 hours it just took you to fix that machine could have been 2 hours with a service contract. It doesn't take long for that kind of math to wipe out any savings whiteboxes may have had.

Worst case, someone goes and buys Dell. :-)

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Erik's experiences echo mine. I've never seen a white box in a medium-to-large company that I've visited - always a name brand. His comments on sysadmin staffing are dead on.

Jim Litchfield
Oracle Consulting

On 7/19/2010 5:35 PM, Erik Trimble wrote:
> They may *believe* they can, but almost no one ever does, because the
> up-front hardware savings come at the price of increased manpower, and
> companies aren't willing to make that trade.
> [...]
> Worst case, someone goes and buys Dell. :-)

--
James Litchfield | Senior Consultant
Oracle ACS
Phone: +1 4082237059 | Mobile: +1 4082180790
Edward Ned Harvey
2010-Jul-20 03:48 UTC
[zfs-discuss] carrying on [was: Legality and the future of zfs...]
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Pasi Kärkkäinen
>
> Redhat Fedora 13 includes BTRFS, but it's not used as a default (yet).
> RHEL6 beta also includes BTRFS support (tech preview), but again,
> Upcoming Ubuntu 10.10 will use BTRFS as a default.

As of 3 days ago, although BTRFS is shipping with some OSes, it's not considered stable or production-ready: "Use it if you don't care about the data on your box or do regular backups."
http://permalink.gmane.org/gmane.comp.file-systems.btrfs/6145

That being said, a lot of people are using it, generally without issue. I think the key summary here is: you should be OK as long as you back up regularly and you're not using it in an environment where reliability is critical.

One of the present drawbacks is that there is no "fsck" for btrfs. And it can't scrub or anything like that - if you get a filesystem error, it cannot be fixed.

I wonder if NetApp is going to sue Linus? ;-)
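For contrast with the missing btrfs fsck/scrub, this is roughly what the equivalent check looks like on ZFS; "tank" is a placeholder pool name, not any particular system's configuration.

    # ZFS contrast: verify (and, on redundant pools, repair) on-disk data.
    # "tank" is a placeholder pool name.
    zpool scrub tank        # read every block in the pool and verify its checksum;
                            # mirrored/raidz pools repair bad blocks from good copies
    zpool status -v tank    # show scrub progress and any errors that were found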
Linder, Doug
2010-Jul-21 21:29 UTC
[zfs-discuss] carrying on [was: Legality and the future of zfs...]
Andrej Podzimek wrote:
> 1) Btrfs does not have mature and user-friendly command-line
> tools. AFAIK, you can only list your snapshots and subvolumes by
> grep'ing the tree dump. ;-)

I haven't looked closely at the btrfs commands recently, but from what I've seen, they're really amazingly ugly - the worst sort of parameter-ridden, fiddly, picky, completely non-mnemonic Unix commands. And I think that's a huge, huge drawback - more than most people think.

The traditional hacker mindset is to leave such niceties as usable commands to last, if ever. "If it was hard to write, it should be hard to use!" seems to be the philosophy. That attitude really misses the point that even geeks are human, and even experienced Unix admins hate needlessly complex commands.

I, for one, can say without a doubt that the simplicity and elegance of the ZFS commands was one of the major selling points (a short example of that syntax follows this message). Might I have eventually been persuaded to use ZFS on its features alone? Maybe. But I would have been dragged kicking and screaming, not wanting to learn Yet Another Set of Incomprehensible Commands. If I had started reading the man page and immediately been lost in a sea of parameters and sixteen different interrelated commands, I wonder whether I would have bothered pursuing it at all, or just filed it under "could be interesting, maybe I'll look at it someday".

One of the main reasons I love ZFS so much is that I hated Veritas so much, and one of the reasons I hated Veritas was that doing even the smallest thing required a cheat sheet ten pages long. I never really felt like I "got" Veritas - I just followed cryptic recipes given to me by other people. But ZFS... I grok ZFS. Partly because of its design elegance, partly because the volume-manager layer is gone, but largely because I can understand the commands. I'll never forget the excitement I felt when I saw the video at opensolaris.org demonstrating how simple the commands were, or how happy I was when I tried it the first time and, damn - it worked! *That* easy! I was ecstatic.

If btrfs doesn't *seriously* brush up its commands, I'll probably be very resistant to learning it. At my age and with my level of free time, learning another super-complex set of computer commands isn't exactly high on my list.

But I do have a great idea of how to improve the situation. Here's my suggestion for btrfs: First, rename it BFS and drop the silly, clumsy acronym and fudged pronunciation. No one cares that it's a b-tree or whatever. Second, and most importantly, BFS should STEAL ZFS'S COMMAND SYNTAX, AS VERBATIM AS POSSIBLE. Why not? It's already well designed and easy, and lots of people already know it. Copyright? The ZFS license might raise issues for the code itself, but I don't think there would be any legal restriction on simply borrowing the names of the commands and their syntax. Rename "zpool" to "bpool", rename "zfs" to "bfs", and - voila! - the problem of arcane syntax is gone. I can't see Oracle dragging anyone into court over copying some command syntax.

OK, of course I realize it wouldn't be that simple and that a fair amount of coding would be involved. But it would be interface and parsing code, not the heavy-duty black magic. More-junior developers could handle it while the more senior ones kept working on functionality.

That's my idea, and I think it's brilliant. :) My $0.02.
Doug Linder
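For readers who have not seen it, this is the kind of command sequence Doug is praising - a hedged sketch in which the pool, device, and dataset names (tank, c0t0d0, c0t1d0, tank/home) are purely illustrative placeholders.

    # The ZFS command style Doug describes; names are illustrative only.
    zpool create tank mirror c0t0d0 c0t1d0   # a mirrored pool, created and mounted in one step
    zfs create tank/home                     # a new filesystem, immediately available at /tank/home
    zfs set compression=on tank/home         # properties are plain key=value pairs
    zfs snapshot tank/home@friday            # snapshots are named, cheap, and instantaneous
    zfs list -t all                          # list datasets and snapshots together

Whether btrfs could adopt this verb-noun style wholesale, as Doug suggests, is speculation, but the example shows why the syntax is so easy to retain: two commands, each with a small set of guessable subcommands.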