I am starting to put together a home NAS server that will have the following roles:

(1) Store TV recordings from SageTV over either iSCSI or CIFS. Up to 4 or 5 HD streams at a time. These will be streamed live to the NAS box during recording.
(2) Play back TV (could be a stream being recorded, could be others) to 3 or more extenders.
(3) Hold a music repository.
(4) Hold backups from Windows machines, Mac (Time Machine), and Linux.
(5) Be an iSCSI target for several different VirtualBox machines.

Function 4 will use compression and deduplication. Function 5 will use deduplication.

I plan to start with 5 1.5 TB drives in a raidz2 configuration and 2 mirrored boot drives.

I have been reading these forums off and on for about 6 months trying to figure out how best to piece together this system.

I am first trying to select the CPU. I am leaning towards AMD because of ECC support and power consumption.

For items such as deduplication, compression, checksums, etc., is it better to get a faster clock speed or should I consider more cores? I know certain functions such as compression may run on multiple cores.

I have so far narrowed it down to:

AMD Phenom II X2 550 Black Edition Callisto 3.1GHz
and
AMD Phenom X4 9150e Agena 1.8GHz Socket AM2+ 65W Quad-Core

as they are roughly the same price.
-- 
This message posted from opensolaris.org
* Brian (broconne at vt.edu) wrote:
> I am first trying to select the CPU. I am leaning towards AMD because
> of ECC support and power consumption.

I can't comment on most of your question, but I will point you at:

http://blogs.sun.com/mhaywood/entry/powernow_for_solaris

I *think* the CPUs you're looking at won't be an issue, but it's something to be aware of when looking at AMD kit (especially if you want to manage the processor speed).

Cheers,
-- Glenn
I would go with cores (threads) rather than clock speed here. My home system is a 4-core AMD @ 1.8GHz and performs well. I wouldn't use drives that big, and you should be aware of the overheads of raidz[x].

-marc

On Thu, Feb 4, 2010 at 6:19 PM, Brian <broconne at vt.edu> wrote:
> For items such as deduplication, compression, checksums, etc., is it
> better to get a faster clock speed or should I consider more cores? I
> know certain functions such as compression may run on multiple cores.
Thanks for the reply. Are cores better because of the compression/deduplication being multi-threaded, or because of multiple streams? It is a pretty big difference in clock speed, so I am curious as to why cores would be better. Glad to see your 4-core system is working well for you; it seems like I won't really have a bad choice.

Why avoid large drives? Reliability reasons? My main thought on that is that there is a 3-year warranty, and I am building raidz2 because I expect failure. Or are there other reasons to avoid large drives?

I thought I understood the overhead: the write and read speeds should be roughly that of the slowest disk?

Thanks.
-- 
This message posted from opensolaris.org
Put your money into RAM, especially for dedup.
 -- richard

On Feb 4, 2010, at 3:19 PM, Brian wrote:
> Function 4 will use compression and deduplication.
> Function 5 will use deduplication.
On 05/02/10 01:00, Brian wrote:
> Are cores better because of the compression/deduplication being
> multi-threaded or because of multiple streams?

From what I have seen, ZFS scales very well with multiple cores. If you want to send/receive your filesystems through ssh to another machine, clock speed matters, since ssh only uses one core (but then you can always use netcat). On a Xeon E5520 running at 2.27 GHz we achieve around 70-80 MB/s ssh throughput.

For dedup, you want lots of RAM and, if possible, a large and fast SSD for L2ARC. Someone on this list was asking about estimates on RAM/cache needs based on block sizes / fs size / estimated dedup ratio. Either I missed the answer or there was no really simple answer (other than more is better, which always stays true for RAM and L2ARC). Anyway, we tested it and were surprised by the quantity of reads that ensue.

Arnaud
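A rough back-of-envelope for sizing the dedup table, assuming the commonly cited figure of roughly 320 bytes of ARC/L2ARC per unique block (the exact entry size varies by build):

  1 TB of unique data / 128 KB average block size = ~8 million blocks
  8 million blocks x ~320 bytes per DDT entry     = ~2.5 GB of RAM or L2ARC

Smaller average block sizes (for example, zvols with an 8 KB volblocksize) multiply the block count, and therefore the DDT, accordingly. Recent builds can also simulate dedup on existing data with 'zdb -S <pool>' to get a real histogram before enabling it.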
Hi Brian,

If you are considering testing dedup, particularly on large datasets, see the list of known issues, here:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup

Start with build 132.

Thanks,

Cindy

On 02/04/10 16:19, Brian wrote:
> Function 4 will use compression and deduplication.
> Function 5 will use deduplication.
It sounds like the consensus is more cores over clock speed. Surprising to me, since the difference in clock speed was over 1GHz. So, I will go with a quad core.

I was leaning towards 4GB of RAM, which hopefully should be enough for dedup, as I am only planning on deduplicating my smaller file systems (backups and VMs).

Was my raidz2 performance comment above correct? That the write speed is that of the slowest disk? That is what I believe I have read.

Now on to the hard part of picking a motherboard that is supported and has enough SATA ports!
-- 
This message posted from opensolaris.org
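Since dedup and compression are per-dataset properties, they can be confined to just the backup and VM filesystems. A minimal sketch, assuming a pool named "tank" (the dataset names are placeholders):

  zfs create tank/backups
  zfs set compression=on tank/backups   # "on" currently means lzjb
  zfs set dedup=on tank/backups         # needs build 128+ (pool version 21+)
  zfs create tank/vms
  zfs set dedup=on tank/vms
  zfs get compression,dedup tank/backups tank/vms

Other filesystems in the pool are untouched, although the deduped datasets still compete for the same ARC.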
On Thu, Feb 4, 2010 at 7:54 PM, Brian <broconne at vt.edu> wrote:
> It sounds like the consensus is more cores over clock speed. Surprising to
> me since the difference in clock speed was over 1GHz. So, I will go with a
> quad core.

Four cores @ 1.8GHz = 7.2GHz of threaded performance ([Open]Solaris is relatively decent in terms of threading). Two cores @ 3.1GHz = 6.2GHz. :) Although you may find single-threaded operations slower, as someone pointed out, even those might wash out, as sometimes it's I/O that's the problem.

> I was leaning towards 4GB of ram - which hopefully should be enough for
> dedup as I am only planning on dedupping my smaller file systems (backups
> and VMs).

4GB is a good start.

> Was my raidz2 performance comment above correct? That the write speed is
> that of the slowest disk? That is what I believe I have read.

You are sort-of-correct that it's the write speed of the slowest disk. Mirrored drives will be faster, especially for random I/O, but you sacrifice storage for that performance boost. That said, I have a similar setup as far as number of spindles and can push 200MB/sec+ through it and saturate GigE for iSCSI, so maybe I'm being harsh on raidz2. :)

> Now on to the hard part of picking a motherboard that is supported and has
> enough SATA ports!

I used an ASUS board (M4A785-M) which has six (6) SATA2 ports onboard and pretty decent HyperTransport throughput.

Hope that helps.

-marc
> I plan to start with 5 1.5 TB drives in a raidz2 configuration and 2
> mirrored boot drives.

You want to use compression and deduplication and raidz2. I hope you didn't want to get any performance out of this system, because all of those are compute or IO intensive.

FWIW ... 5 disks in raidz2 will have capacity of 3 disks. But if you bought 6 disks in mirrored configuration, you have a small extra cost, and much better performance.
Interesting comments.. But I am confused.

Performance for my backups (compression/deduplication) would most likely not be #1 priority. I want my VMs to run fast, so is it deduplication that really slows things down?

Are you saying raidz2 would overwhelm current I/O controllers to where I could not saturate a 1 Gb network link?

Is the CPU I am looking at not capable of doing dedup and compression? Or are no CPUs capable of doing that currently? If I only enable it for the backup filesystem, will all my filesystems suffer performance-wise?

Where are the bottlenecks in a raidz2 system that I will only access over a single gigabit link? Are they insurmountable?

> You want to use compression and deduplication and raidz2. I hope you didn't
> want to get any performance out of this system, because all of those are
> compute or IO intensive.
>
> FWIW ... 5 disks in raidz2 will have capacity of 3 disks. But if you bought
> 6 disks in mirrored configuration, you have a small extra cost, and much
> better performance.

-- 
This message posted from opensolaris.org
On Thu, 4 Feb 2010, Brian wrote:> Was my raidz2 performance comment above correct? That the write > speed is that of the slowest disk? That is what I believe I have > read.Data in raidz2 is striped so that it is split across multiple disks. In this (sequential) sense it is faster than a single disk. For random access, the stripe performance can not be faster than the slowest disk though. Bob -- Bob Friesenhahn bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
> Data in raidz2 is striped so that it is split across multiple disks.

Partial truth. Yes, the data is on more than one disk, but it's a parity hash, requiring computation overhead and a write operation on each and every disk. It's not simply striped. Whenever you read or write, you need to access all the disks (or a bunch of 'em) and use compute cycles to generate the actual data stream. I don't know enough about the underlying methods of calculating and distributing everything to say intelligently *why*, but I know this:

> In this (sequential) sense it is faster than a single disk.

Whenever I benchmark raid5 versus a mirror, the mirror is always faster. Noticeably and measurably faster, as in 50% to 4x faster. (50% for a single two-disk mirror versus a 6-disk raid5, and 4x faster for a stripe of mirrors, 6 disks with the capacity of 3, versus a 6-disk raid5.) Granted, I'm talking about raid5 and not raidz. There is possibly a difference there, but I don't think so.
> I want my VMs to run fast - so is it deduplication that really slows
> things down?
>
> Are you saying raidz2 would overwhelm current I/O controllers to where
> I could not saturate a 1 Gb network link?
>
> Is the CPU I am looking at not capable of doing dedup and compression?
> Or are no CPUs capable of doing that currently? If I only enable it
> for the backup filesystem will all my filesystems suffer performance
> wise?
>
> Where are the bottlenecks in a raidz2 system that I will only access
> over a single gigabit link? Are they insurmountable?

I'm not sure anybody can answer your questions. I will suggest you just try things out and see for yourself. Everybody will have different techniques to tweak performance...

If you want to use fast compression and dedup: lots of CPU and RAM. (You said 4G, but I don't think that's a lot. I never buy a laptop with less than 4G nowadays. I think a lot of RAM is 16G and higher.)

As for raidz2 and Ethernet ... I don't know. If you've got 5 disks in a raidz2 configuration, and assuming each disk can sustain 500Mbit/s, then theoretically these disks might be able to achieve 1.5Gbit or 2.5Gbit with perfect efficiency ... So maybe they can max out your Ethernet. I don't know.

But I do know, if you had a stripe of 3 mirrors, they would have absolutely no trouble maxing out the Ethernet. Even a single mirror could just barely do that. For 2 or more mirrors, it's cake.
Brian wrote:
> Performance for my backups (compression/deduplication) would most likely not be #1 priority.
>
> I want my VMs to run fast - so is it deduplication that really slows things down?

Dedup requires a fair amount of CPU, but it really wants a big L2ARC and lots of RAM. I'd seriously consider no less than 8GB of RAM, and look at getting a smaller-sized (~40GB) SSD, something on the order of an Intel X25-M. Also, iSCSI-served VMs tend to do mostly random I/O, which is better handled by a striped mirror than raidz.

> Are you saying raidz2 would overwhelm current I/O controllers to where I could not saturate a 1 Gb network link?

No.

> Is the CPU I am looking at not capable of doing dedup and compression? Or are no CPUs capable of doing that currently? If I only enable it for the backup filesystem will all my filesystems suffer performance-wise?

All the CPUs you indicate can handle the job; it's a matter of getting enough data to them.

> Where are the bottlenecks in a raidz2 system that I will only access over a single gigabit link? Are they insurmountable?

RaidZ is good for streaming writes of large size, where you should get performance roughly equal to the number of data drives; likewise for streaming reads. Small writes generally limit performance to the level of about 1 disk, regardless of the number of data drives in the RaidZ. Small reads are in between in terms of performance.

Personally, I'd look into having 2 different zpools - a striped mirror for your iSCSI-shared VMs, and a raidz2 for your main storage. In any case, for dedup, you really should have an SSD for L2ARC, if at all possible. Being able to store all the metadata for the entire zpool in the L2ARC really, really helps speed up dedup.

Also, about your CPU choices, look here for a good summary of the current AMD processor features: http://en.wikipedia.org/wiki/List_of_AMD_Phenom_microprocessors (this covers the Phenom, Phenom II, and Athlon II families). The main difference between the various models comes down to the amount of L3 cache and HT speed. I'd be interested in doing some benchmarking to see exactly how the variations make a difference.

-- 
Erik Trimble
Java System Support
Santa Clara, CA
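A minimal sketch of that two-pool layout; the device names (c0t0d0, etc.) are placeholders for the actual disks:

  # striped mirrors for the iSCSI-served VMs (random I/O)
  zpool create vmpool mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

  # raidz2 for the bulk storage
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0

  # SSD L2ARC so the dedup table has somewhere fast to live
  zpool add tank cache c2t0d0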
On 02/05/2010 03:21 AM, Edward Ned Harvey wrote:
> FWIW ... 5 disks in raidz2 will have capacity of 3 disks. But if you bought
> 6 disks in mirrored configuration, you have a small extra cost, and much
> better performance.

But the raidz2 can survive the loss of ANY two disks, while the 6-disk mirror configuration will be destroyed if the two disks lost are from the SAME pair.

-- 
Jesus Cea Avion / jcea at jcea.es - http://www.jcea.es/
jabber / xmpp:jcea at jabber.org
> I am leaning towards AMD because of ECC support

well, let's look at Intel's offerings... RAM is faster than AMD's at 1333MHz DDR3, and one gets ECC and a thermal sensor for $10 over non-ECC:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820139040

This MB has two Intel ethernets, and for an extra $30 an ether KVM (LOM):
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182212

One needs a Xeon 34xx for ECC; the 45W version isn't on newegg, and ignoring the one without Hyper-Threading leaves us:
http://www.newegg.com/Product/Product.aspx?Item=N82E16819117225

Yea, @ 95W it isn't exactly low power, but 4 cores @ 2533MHz and another 4 Hyper-Thread cores is nice.. If you only need one core, the marketing paperwork claims it will push to 2.93GHz too. But the ram bandwidth is the big win for Intel.

Avoid the temptation, but @ 2.8GHz without ECC, this is close $$:
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115214

Now, this gets one to 8G ECC easily... AMD's unfair advantage is all those ram slots on their multi-die MBs... A slow AMD cpu with 64G ram might be better depending on your working set / dedup requirements.

Rob
>> Was my raidz2 performance comment above correct?
>> That the write speed is that of the slowest disk?
>> That is what I believe I have read.

> You are sort-of-correct that it's the write speed of the slowest disk.

My experience is not in line with that statement. RAIDZ will write a complete stripe plus parity (RAIDZ2 -> two parities, etc.). The write speed of the entire stripe will be brought down to that of the slowest disk, but only for its portion of the stripe. In the case of a 5-spindle RAIDZ2, 1/3 of the stripe will be written to each of three disks and parity info on the other two disks. The throughput would be 3x the slowest disk for read or write.

> Mirrored drives will be faster, especially for random I/O. But you
> sacrifice storage for that performance boost.

Is that really true? Even after glancing at the code, I don't know if zfs overlaps mirror reads across devices. Watching my rpool mirror leads me to believe that it does not. If true, then mirror reads would be no faster than a single disk. Mirror writes are no faster than the slowest disk.

As a somewhat related rant, there seems to be confusion about mirror IOPS vs. RAIDZ[123] IOPS. Assuming mirror reads are not overlapped, then a mirror vdev will read and write at roughly the same throughput and IOPS as a single disk (ignoring bus and cpu constraints). Also ignoring bus and cpu constraints, a RAIDZ[123] vdev will read and write at roughly the throughput of a single disk multiplied by the number of data drives: three in the config being discussed. Also, a RAIDZ[123] vdev will have IOPS performance similar to that of a single disk.

A stack of mirror vdevs will, of course, perform much better than a single mirror vdev in terms of throughput and IOPS. A stack of RAIDZ[123] vdevs will also perform much better than a single RAIDZ[123] vdev in terms of throughput and IOPS. RAIDZ tends to have more CPU overhead and provides more flexibility in choosing the optimal data-to-redundancy ratio.

Many read IOPS problems can be mitigated by L2ARC, even a set of small, fast disk drives. Many write IOPS problems can be mitigated by a ZIL device.

My anecdotal conclusions, backed by zero science,
Marty
-- 
This message posted from opensolaris.org
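As a back-of-envelope illustration of that vdev math, assuming a typical 7200 RPM SATA disk does roughly 100 MB/s streaming and ~80 random IOPS (both assumptions, not measurements):

  5-disk raidz2 (one vdev, 3 data disks): ~3 x 100 = ~300 MB/s streaming,
                                          but still only ~80 random IOPS
  3 x 2-disk mirrors (three vdevs):       ~300 MB/s streaming writes,
                                          ~3 x 80 = ~240 random write IOPS
                                          (more for reads, if reads are
                                          spread across both halves)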
On 05/02/2010 04:11, Edward Ned Harvey wrote:
>> Data in raidz2 is striped so that it is split across multiple disks.
>
> Partial truth. Yes, the data is on more than one disk, but it's a parity
> hash, requiring computation overhead and a write operation on each and
> every disk. It's not simply striped.

Well, that's not entirely true. When reading from raidz2 (non-degraded) you don't need to re-compute any hashes, except for the standard fs block checksum, which zfs checks regardless of underlying redundancy.

>> In this (sequential) sense it is faster than a single disk.
>
> Whenever I benchmark raid5 versus a mirror, the mirror is always faster.
> Granted, I'm talking about raid5 and not raidz. There is possibly a
> difference there, but I don't think so.

Actually, there is. One difference is that when writing to a raidz{1|2} pool compared to a raid-10 pool, you should get better throughput if at least 4 drives are used. Basically this is because in RAID-10 the maximum write throughput you can get is the total aggregated throughput of half the number of used disks, and only assuming there are no other bottlenecks between the OS and the disks, especially as you double the bandwidth requirements due to mirroring. In the case of RAID-Zn you have some extra overhead for writing the additional parity, but other than that you should get a write throughput closer to T-N (where N is the RAID-Z level) instead of T/2 in RAID-10.

See http://milek.blogspot.com/2006/04/software-raid-5-faster-tha_114588672235104990.html

-- 
Robert Milkowski
http://milek.blogspot.com
On Fri, 5 Feb 2010, Rob Logan wrote:
> well, let's look at Intel's offerings... RAM is faster than AMD's
> at 1333MHz DDR3 and one gets ECC and a thermal sensor for $10 over non-ECC

Intel's RAM is faster because it needs to be. It is wise to see the role that architecture plays in total performance.

> Now, this gets one to 8G ECC easily... AMD's unfair advantage is all those
> ram slots on their multi-die MBs... A slow AMD cpu with 64G ram
> might be better depending on your working set / dedup requirements.

With the AMD CPU, the memory will run cooler and be cheaper. Regardless, for zfs, memory is more important than raw CPU performance.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
> if zfs overlaps mirror reads across devices.

it does... I have one very old disk in this mirror, and when I attach another element one can see more reads going to the faster disks... this paste isn't from right after the attach but since the reboot, yet one can still see the reads are load-balanced depending on the response of the elements in the vdev.

13 % zpool iostat -v
                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
rpool         7.01G   142G      0      0  1.60K  1.44K
  mirror      7.01G   142G      0      0  1.60K  1.44K
    c9t1d0s0      -      -      0      0    674  1.46K
    c9t2d0s0      -      -      0      0    687  1.46K
    c9t3d0s0      -      -      0      0    720  1.46K
    c9t4d0s0      -      -      0      0    750  1.46K

but I also support your conclusions.

Rob
>>>>> "b" == Brian <broconne at vt.edu> writes:b> (4) Hold backups from windows machines, mac (time machine), b> linux. for time machine you will probably find yourself using COMSTAR and the GlobalSAN iSCSI initiator because Time Machine does not seem willing to work over NFS. Otherwise, for Macs you should definitely use NFS, and you should definitely use the automounter, and you should use it with the ''net'' option (let Mac OS pick were tou mount the fs) if you have heirarchical mounts. Anyway for time machine you cannot use NFS. I''m using: * snv_130 * globalSAN_4.0.0.197_BETA-20091110 * Mac OS X 10.5.<latest> and it seems to basically work for the last ~1month. I''ve no reason to believe these versions are special but suggest you get the BETA globalsan and not the stable one. for linux, if you mount Linux NFS filesystems from Solaris you need to use ''-o sec=sys'' to avoid everything showing up as guest, due to a weird corner case that I think eventually got fixed on one side or the other but probably hasn''t percolated through all the stable branches yet. If you mount Solaris NFS filesystems from Linux, you may want to use ''-o noacl'' because Solaris NFS fabricates ACL''s and feeds them to Linux even when you haven''t made any, leading to annoying ''+'' signs in ''ls -l'' and sometimes weird, unnecessary permissions problems. This happens even with NFSv3. :( What''s even stupider, busybox ''mount'' doesn''t seem to support the noacl flag which cost me an extra couple hours getting an NFS-rooted system to boot. I like the idea of smoothly transitioning to a more advanced permissions system, but IMHO the whole mess just goes to show you, let people who''ve been mucking about with Windows touch anything else in your codebase, and their brains are so warped by the influence of that platform on their thinking they make a ponderous mess of it and then chant ``this shouldn''t be happening'''' over and over. b> (5) Be an iSCSI target for several different Virtual Boxes. I''ve been using plain statically-allocated (not dynamic) .VDI''s on ZFS filesystems. I''ve not been using zvol''s nor any iSCSI yet. If you do the latter two suggest comparing performance with the former one---there are rumors of some cache flush knobs may need tuning. Also in general when you yank the cord, the integrity of a physical machine''s filesystems is guaranteed, but the same is *not* true of a virtual machine when its host''s cord is yanked. It''s supposed to be true when you force-virtual-powerdown the guest, but not when you yank the host''s cord, because of the same knobs were twisted to compromise integrity for performance. The compromise is probably the right one provided you can work around it, by for example snapshotting the guest so yuo can roll back if there''s corruption, and keeping oft-changing files that can''t be rolled back outside the guest using either guest serevices shared folders on Windows or NFS on Unix. b> Function 4 will use compression and deduplication. Function 5 b> will use deduplication. I''ve not dared to use dedup yet. In particular the DDT needs to fit in RAM (or maybe L2ARC) to avoid performance degredations so severe you may find yourself painted into a corner (ex., ''zfs delete'' runs>1wk forcing you to give up, ''zfs send'' non-deduped filesystemselsewhere, destroy pool, restore from backup). 
not sure sddt-vdev is the best idea, but that's discussed here:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6913566

What's missing, to my view, is a way to manage it: if an overgrown DDT can, in effect, trash the pool by making maintenance commands take forever, then there's got to be a way to watch the size of the DDT, maybe even cap it and disable dedup if it overgrows. That said, I haven't tried it, so I'm talking out my ass.

Also, gzip compression does not sound like it works well---suggest lzjb instead---but this might be fixed in 6586537, 6806882, or by this fix, which sounds like a fairly big deal:

http://arc.opensolaris.org/caselog/PSARC/2009/615/mail

so I would say gzip may be worth another try now, but definitely be ready to fall back to lzjb and convert with zfs send | zfs recv.

anyway... seems many things are really improving drastically since a year ago, and thank god for the list!
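A minimal sketch of that send|recv conversion, assuming an existing gzip-compressed filesystem tank/data (the names are placeholders). A plain send (no -R or -p) carries no properties, so the received filesystem inherits compression from its parent, and the blocks are recompressed as the receive writes them:

  zfs create -o compression=lzjb tank/lzjb          # staging parent
  zfs snapshot tank/data@convert
  zfs send tank/data@convert | zfs recv tank/lzjb/data
  zfs destroy -r tank/data
  zfs rename tank/lzjb/data tank/data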
On Fri, Feb 5, 2010 at 12:20 PM, Miles Nordin <carton at ivy.net> wrote:
> for time machine you will probably find yourself using COMSTAR and the
> GlobalSAN iSCSI initiator because Time Machine does not seem willing
> to work over NFS. Otherwise, for Macs you should definitely use NFS,

Slightly off-topic ... You can make Time Machine work with CIFS or NFS mounts by setting a system preference. The command is:

defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

I've had some success trying to get my father-in-law's system to back up to a Drobo with this. It was working last time I was by his house, but I'm not sure if it's still working.

-B
-- 
Brandon High : bhigh at freaks.com
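For those going the COMSTAR/iSCSI route for Time Machine instead, a minimal sketch of the target side, assuming a pool named "tank" on an snv_130-era box with the iSCSI target packages installed (names and sizes are placeholders):

  zfs create -V 200G tank/timemachine           # zvol to back the LUN
  svcadm enable stmf                            # STMF framework
  sbdadm create-lu /dev/zvol/rdsk/tank/timemachine
  stmfadm add-view <GUID printed by sbdadm>     # expose the LU to all initiators
  svcadm enable -r svc:/network/iscsi/target:default
  itadm create-target                           # default iSCSI target

Then point the GlobalSAN initiator on the Mac at the target and format the resulting disk as HFS+ for Time Machine.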
On Feb 5, 2010, at 10:49 AM, Robert Milkowski <milek at task.gda.pl> wrote:
> Actually, there is.
> One difference is that when writing to a raid-z{1|2} pool compared
> to a raid-10 pool you should get better throughput if at least 4
> drives are used.

That hasn't been my experience with raidz. I get the max read and write IOPS of the slowest drive in the vdev.

Which makes sense, because each write spans all drives and each read spans all drives (except the parity drives), so they end up having the performance characteristics of a single drive.

Now, if you have enough drives you can create multiple raidz vdevs to get the IOPS up, but you need a lot more drives than what multiple mirror vdevs can provide IOPS-wise with the same number of spindles.

-Ross
> Intel's RAM is faster because it needs to be.

I'm confused how AMD's dual-channel, two-way-interleaved 128-bit DDR2-667 into an on-cpu controller is faster than Intel's Lynnfield dual-channel, rank- and channel-interleaved DDR3-1333 into an on-cpu controller.
http://www.anandtech.com/printarticle.aspx?i=3634

> With the AMD CPU, the memory will run cooler and be cheaper.

Cooler, yes, but only $2 more per gig for 2x the bandwidth?
http://www.newegg.com/Product/Product.aspx?Item=N82E16820139050
http://www.newegg.com/Product/Product.aspx?Item=N82E16820134652

And if one uses all 16 slots, that 667MHz simm runs at 533MHz with AMD. The same is true for Lynnfield: if one uses Registered DDR3, one only gets 800MHz with all 6 slots (single or dual rank).

> Regardless, for zfs, memory is more important than raw CPU

agreed! but everything must be balanced.

Rob
On 06/02/2010 02:38, Ross Walker wrote:
> That hasn't been my experience with raidz. I get the max read and write
> IOPS of the slowest drive in the vdev.
>
> Which makes sense, because each write spans all drives and each read
> spans all drives (except the parity drives), so they end up having the
> performance characteristics of a single drive.

Please note that I was writing about write *throughput* in terms of MB/s, not IOPS.

But even in terms of write IOPS, RAID-Z can be faster than RAID-10, assuming asynchronous I/O is issued and there is enough memory to buffer it for up to 30s. If that is the case then, from the application's point of view, writes will be as fast as writing to memory in both the raid-z and raid-10 cases. But because zfs aggregates writes and turns them into basically sequential writes, raid-z could provide more throughput than raid-10, which needs to write your data twice, so raid-z may be able to flush transactions to disks more quickly.

See http://milek.blogspot.com/2006/04/software-raid-5-faster-tha_114588672235104990.html

-- 
Robert Milkowski
http://milek.blogspot.com
> b> (4) Hold backups from windows machines, mac (time machine),
> b> linux.
>
> for time machine you will probably find yourself using COMSTAR and the
> GlobalSAN iSCSI initiator, because Time Machine does not seem willing
> to work over NFS. Otherwise, for Macs you should definitely use NFS,
> and you should definitely use the automounter, and you should use it
> with the 'net' option (let Mac OS pick where to mount the fs) if you
> have hierarchical mounts.

A few comments here ...

True, Time Machine won't work across NFS, at least not unless you enable the "unsupported" bit, but depending on the server OS ... FreeNAS, for example, uses ZFS as the underlying filesystem and easily allows you to create AFP shares which support Time Machine.

As for Mac access via NFS, automounter, etc., I found that the UID/GID / POSIX permission bits were a problem, and I found it was easier and more reliable for the Macs to use SMB instead of NFS. At least in my environment.
On Fri, 5 Feb 2010, Rob Logan wrote:
>> Intel's RAM is faster because it needs to be.
> I'm confused how AMD's dual-channel, two-way-interleaved
> 128-bit DDR2-667 into an on-cpu controller is faster than
> Intel's Lynnfield dual-channel, rank- and channel-interleaved
> DDR3-1333 into an on-cpu controller.
> http://www.anandtech.com/printarticle.aspx?i=3634

I see that you are reading a game computing web site. It is for people who want to build PCs to run video games under Windows. The most useful thing I see in the referenced article is that these new Intel Core i7 CPUs are able to idle at much lower power levels, which seems quite useful for a home NAS server. Otherwise, I don't see much which indicates what the performance would be with Solaris/zfs in a storage setup.

The main focus should be on how much ECC RAM you can stuff into the motherboard and how much it costs. After that comes multi-threaded memory I/O performance and power consumption. Raw CPU computational performance should be way down in the priority level. Even a fairly slow CPU should be able to saturate gigabit ethernet.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Bob Friesenhahn wrote:
> The main focus should be on how much ECC RAM you can stuff into the
> motherboard and how much it costs. After that comes multi-threaded
> memory I/O performance and power consumption. Raw CPU computational
> performance should be way down in the priority level. Even a fairly
> slow CPU should be able to saturate gigabit ethernet.

I would second Bob's recommendations. For a storage box, the primary thing of importance is having enough ECC RAM to cache everything. A big L2ARC SSD seems to be equally important for those using dedup regularly.

Also, be /very/ careful with buying non-Xeon Intel CPUs. With anything prior to the Nehalem architecture, the memory controller was on the motherboard, and you specifically have to get a motherboard which supports ECC RAM. For the Nehalem and later architectures (Core i3, i5, i7), with the memory controller on the CPU, only SOME of them support ECC RAM. I /strongly/ suggest looking at the CPU specs from Intel first, when getting any non-Xeon CPU or motherboard: http://ark.intel.com/Default.aspx

AMD, of course, does not have this problem. ALL x64 AMD CPUs sold these days support ECC, and finding an AMD motherboard which supports it is correspondingly easy.

Frankly, I suspect that a small storage box pumping data out a single 1Gbit ethernet interface really doesn't stress a CPU that much, in the big scheme of things. I like the original Phenom X3 or X4 as a good compromise between modest L2 cache, modest power draw, good multi-core, and really cheap price. If you really want something hard-core, I'd step over into the older AMD Barcelona-based Opterons. They're equivalent to the Phenom, plus their motherboards come with just stupid numbers of DIMM slots. :-)

-- 
Erik Trimble
Java System Support
Santa Clara, CA
> I like the original Phenom X3 or X4

we all agree ram is the key to happiness. The debate is what offers the most ECC ram for the least $. I failed to realize the AM3 cpus accept Unbuffered ECC DDR3-1333 like Lynnfield. To use Intel's 6 slots vs AMD's 4 slots, one must use Registered ECC. So the low-cost mission is something like:

AMD Phenom II X4 955 Black Edition Deneb 3.2GHz Socket AM3 125W
$150 http://www.newegg.com/Product/Product.aspx?Item=N82E16819103808
$ 85 http://www.newegg.com/Product/Product.aspx?Item=N82E16813131609
$ 60 http://www.newegg.com/Product/Product.aspx?Item=N82E16820139050

But we are still stuck at 8G without going to expensive ram or a more expensive CPU.

Rob
Rob Logan wrote:
> But we are still stuck at 8G without going to expensive ram or
> a more expensive CPU.

Socket AM2/AM2+ supports a maximum of 4 DIMM sockets (2 dual banks). I /think/ AM3 has the same limitation, but I can't verify that. As for Intel, I can only find a single 6-DIMM motherboard for LGA1156 (the i3/i5/i7 socket, not the LGA1366 i7-only socket). It's $250 (http://www.newegg.com/Product/Product.aspx?Item=N82E16813128410).

Frankly, for more than 8GB, your best bet is to pick up an EOL'd motherboard - a dual Socket F board which supports an AMD Barcelona-era Opteron is probably the best buy - $250 for both, give or take. And Registered ECC DDR2-667 runs less than $50 per 2GB stick.

----

In reality, what I've found is often the best bet is to get an older IBM system from eBay. There are plenty of them around, they're pretty cheap, and parts are relatively inexpensive. For instance, I just got a (new, still under warranty) IBM x3500 for about $500 - it's a tower case. The 2U rackmount IBM x3655 is also a good fit here. The sole drawback of these things is that they aren't exactly built to be super-low power. Oh well. But they've got all sorts of nice bells and whistles. :-)

(Good news for me: I got a fully-tricked-out x3500 with 2 dual Xeon 5140s, 8GB of RAM, 4x73GB SAS and 4x750GB SATA drives for under $1k. AND a battery-backed RAID controller. AND a real Service Processor. AND that includes the 3-year IBM warranty. And it runs OpenSolaris 2009.06 with no tricks required - simply boot, install, and it's all set.)

-- 
Erik Trimble
Java System Support
Santa Clara, CA
On Feb 6, 2010, at 21:08, Rob Logan wrote:
> I failed to realize the AM3 cpus accept Unbuffered ECC DDR3-1333
> like Lynnfield. To use Intel's 6 slots vs AMD's 4 slots, one must use
> Registered ECC.

What is the difference between "unbuffered" and "registered"?
David Magda wrote:
> What is the difference between "unbuffered" and "registered"?

Buffered is often used as a synonym for Registered memory. The addition of a register (buffer) between the memory controller and the memory chips reduces the bus load. This increases the number of modules that can be driven, which is why many systems specify unbuffered and (significantly higher) registered memory capacities.

-- 
Ian.
Ian Collins wrote:
> David Magda wrote:
>> What is the difference between "unbuffered" and "registered"?

About $5? <baddaboom> Thank you, thank you, I'll be here all week.

> Buffered is often used as a synonym for Registered memory.
> The addition of a register (buffer) between the memory controller and
> the memory chips reduces the bus load. This increases the number of
> modules that can be driven.

Be careful: Fully-Buffered and Registered are NOT the same.

http://en.wikipedia.org/wiki/Registered_memory

It's (really) hard to design a system to use more than 4 DIMM slots with Unbuffered RAM and still keep everything stable.

-- 
Erik Trimble
Java System Support
Santa Clara, CA
>>>>> "enh" == Edward Ned Harvey <solaris at nedharvey.com> writes:enh> As for mac access via nfs, automounter, etc ... I found that enh> the UID/GID / posix permission bits were a problem, and I enh> found it was easier and more reliable for the macs to use SMB I found it much less reliable, if by reliable you mean not losing data. There''s a questionable GUI feature that throws up a [Disconnect] window whenever a normal unix system would say ''not responding still trying'', but so long as you ignore this window instead of pressing what seems to be the only button, the old Unix feature of ``server can reboot without losing client writes'''' seems to still be there. SMB, not so much. There''s also questions of case sensitivity, locking, being mounted at boot time rather than login time, accomodating more than one user. I''ve also heard SMB is far slower. The Macs I''ve switched to automounted NFS are causing me less trouble. If you are in a ``share almost everything'''' situation, just add umask 000 to /etc/launchd.conf and reboot. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20100208/be37efbe/attachment.bin>
Edward Ned Harvey
2010-Feb-08 21:56 UTC
[zfs-discuss] NFS access by OSX clients (was Cores vs. Speed?)
> There are also questions of case sensitivity, locking, being mounted at
> boot time rather than login time, and accommodating more than one user.
> I've also heard SMB is far slower.
>
> The Macs I've switched to automounted NFS are causing me less trouble.
>
> If you are in a ``share almost everything'' situation, just add
>
> umask 000
>
> to /etc/launchd.conf and reboot.

How are you managing UIDs on the NFS server? If user eharvey connects to the server from client Mac A, or Mac B, or Windows 1, or Windows 2, or any of the linux machines ... the server has to know it's eharvey, and assign the correct UIDs etc. When I did this in the past, I maintained a list of users in AD, and a duplicate list of users in OD, so the mac clients could resolve names to UIDs via OD. And a third duplicate list in NIS so the linux clients could resolve. It was terrible. You must be doing something better?

How do you manage your NFS exports? Do all the clients have statically assigned IPs, or do you simply export to the whole subnet, or do you do something else? I would consider it a security risk if any schmo could take any unused IP address, connect to the server, and claim to be eharvey without any problem.

Also, I had a umask problem, which presumably you've got solved by the launchd.conf edit. Presumably this umask applies whether you create a folder in Finder, or create a file in MS Word, or save a new text file from TextEdit ... The umask is applied to every file and every folder creation, regardless of which app is doing the creation, right?
>>>>> "enh" == Edward Ned Harvey <macenterprise at nedharvey.com> writes:enh> How are you managing UID''s on the NFS server? All the macs are installed from the same image using asr. And for the most part, there''s just one user, except where there isn''t, and then I manage uid''s by hand. enh> When I did this in the past, I maintained a list of users in enh> AD, and duplicate list of users in OD, so the mac clients enh> could resolve names to UID''s via OD. And a third duplicate enh> list in NIS so the linux clients could resolve. It was enh> terrible. Why is that terrible? Is it impossible to automate because of the AD piece? OD/NIS should be dumpable from SQL easily, right? If AD is the unscriptable piece, it just seems kind of sad to throw the whole thing out and standardize on the one piece that''s the most convoluted and brittle and least automatable, instead of the other way around. enh> How do you manage your NFS exports? [...] export to the whole enh> subnet yeah, that. rw=@1.2.3.0/24 there is a highly stupid bug that would crash mountd for NFSv4 or get incorrect refusal for NFSv3 if the IP was not lookupable in reverse DNS or /etc/hosts. but it may be fixed now because someone from nfs-discuss was unable to reproduce. enh> I would consider it a security risk, if any schmo could take enh> any unused IP address, connect to the server, and claim to be enh> eharvey yeah there is zero security, none at all. I don''t really think adding exports restrictions at a finer granularity than subnet would help much. Only Kerberos would help. but most of the security we care about comes from taking snapshots: that''s the attack that''s relevant here, disgruntled or confused employees deleting everything. This is a robust kind of security, not M&M model. also every desktop has a read-only copy of yesterday''s shared filesystem, from another nfs server populated with rsync, pre-mounted, in case of problems with the writeable one. At least it is not crap security like SMB, with five or ten wildly different variants and password formats operating on different ports some with MAC session-binding some without. I admit SMB has some security rather than none, but it''s a slow crashy clumsy caveat-laden protocol. You might also look at it this way: if there''s going to be a panic/DoS or exploitable buffer overflow security problem, it''s far more likely to be in the SMB stack than the NFS stack. (that said, ''mknod <file> b 14 <n>'' seems to panic a Solaris NFS server, at least b71.) enh> solved by the launchd.conf edit. Presumably this umask enh> applies, whether you create a folder in Finder, or create a enh> file in MS Word, or save a new text file from TextEdit ... The enh> umask is applied to every file and every folder creation, enh> regardless of which app is doing the creation, right? right. This much works perfectly AFAICT. I suppose if you have a user database and want private user folders, you just make them owned by that user and chmod 700. At least that much works everywhere and survives backup, unlike this complete disaster that is ACL''s. I get it, the NFSv3 featureset with no text usernames and no Kerberos unchanged in two decades is not a reasonable answer to modern expectations, and NIS is no longer the unifying directory service it once was now that Mac is a credible client. AD can go fuck itself: buy a windows server and another sysadmin to manage it, or suffer the polluting effect it has on your mind and your entire operation. but, yeah, NFSv3 is not enough. 
Its zero-security simplicity turns out to be exactly what we need here, though, and the Mac client with automounter, 10.5 or later, is extremely solid, more so than the other Mac filesystems or GlobalSAN.
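On the Solaris side, that subnet-wide export is just the sharenfs property; a minimal sketch, with "tank/shared" and the 192.168.1.0/24 subnet as placeholders:

  zfs set sharenfs='rw=@192.168.1.0/24' tank/shared
  zfs get sharenfs tank/shared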
Ross Walker
2010-Feb-09 13:35 UTC
[zfs-discuss] NFS access by OSX clients (was Cores vs. Speed?)
On Feb 8, 2010, at 4:58 PM, Edward Ned Harvey <macenterprise at nedharvey.com> wrote:
> How are you managing UIDs on the NFS server? If user eharvey connects to
> the server from client Mac A, or Mac B, or Windows 1, or Windows 2, or any
> of the linux machines ... the server has to know it's eharvey, and assign
> the correct UIDs etc. When I did this in the past, I maintained a list of
> users in AD, and a duplicate list of users in OD, so the mac clients could
> resolve names to UIDs via OD. And a third duplicate list in NIS so the
> linux clients could resolve. It was terrible. You must be doing something
> better?

The way I did this type of integration in my environment was to set up a Linux box with winbind and have the NIS maps pull out just the UID ranges I wanted shared over NIS, with all passwords blanked out. Then all *nix-based systems use NIS+Kerberos.

I suppose one could do the same with LDAP, but winbind has the advantage of auto-creating UIDs based on the user's RID plus a mapping range, which saves A LOT of work in creating UIDs in AD.

-Ross
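A minimal sketch of the Samba 3-era smb.conf settings such a winbind bridge might use; the realm, workgroup, and ranges are hypothetical:

  [global]
      security = ads
      realm = EXAMPLE.COM
      workgroup = EXAMPLE
      # deterministic UIDs derived from each user's AD RID
      idmap config EXAMPLE : backend = rid
      idmap config EXAMPLE : range = 10000-99999
      winbind enum users = yes    # lets getent dump users to feed the NIS maps

With enumeration on, 'getent passwd' output can be filtered for the wanted UID range (passwords blanked) and fed into the NIS Makefile.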