Alan McKay
2009-Oct-20 20:28 UTC
[CentOS] openfiler (was: using CentOS as an iSCSI server?)
> Simple, it's only a NAS device, and not really a file server / web
> server / database server as well.

Here is something I am currently looking at, and I'm wondering if you'd considered it, or if anyone here has done it. I've got a bunch of existing hardware - really good IBM stuff that is all installed with CentOS (with a few exceptions). We want to move to virtualization, but the direct-attached storage is a bit of an issue, because with VMware ESXi you cannot do some of the fancy stuff like vMotion (moving a live running server from one host to another). So of course openfiler comes to mind. However, most of this hardware has pretty significant CPU and RAM horsepower, so just running it as openfiler would be quite wasteful.

What I see some people doing is:
- install ESXi (4.0) onto the bare metal
- install openfiler as a virtual machine
- give openfiler all the disk
- serve the disk out to other VMs via openfiler

This may seem redundant vs. just doing it without openfiler, but as mentioned, a lot of the fancy features you only get with virtualized disk. I am about to do some benchmarks on this stuff to see what percentage of performance I give up by doing it this way.

--
"Don't eat anything you've ever seen advertised on TV"
 - Michael Pollan, author of "In Defense of Food"
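For what it's worth, the "percentage of performance I give up" comparison is just relative throughput (or latency) between the bare-metal run and the openfiler-in-a-VM run. A minimal sketch of that arithmetic, with purely hypothetical numbers standing in for real fio/dd results:

```python
def overhead_pct(baseline, virtualized):
    """Percentage of performance lost going from direct-attached
    storage to disk served back through a virtualized openfiler."""
    return 100.0 * (baseline - virtualized) / baseline

# Hypothetical sequential-throughput numbers in MB/s; real values
# would come from the actual benchmark runs on both setups.
direct = 210.0          # bare-metal, direct-attached disk
via_openfiler = 165.0   # same disk, served from an openfiler VM

print("overhead: %.1f%%" % overhead_pct(direct, via_openfiler))
```

Running both configurations with the same benchmark tool and workload is the part that matters; the percentage itself falls out trivially.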
Alan McKay wrote:
> This may seem redundant vs just doing it without openfiler, but as
> mentioned a lot of the fancy features you only get with virtualized
> disk.

Doing that for the most part defeats the purpose of using things like vMotion in the first place, that is, being able to evacuate a system to perform hardware/software maintenance on it. Myself, I have 12 vSphere ESXi systems deployed at remote sites using local storage; they run web servers mostly, and are redundant, so I don't need things like vMotion. Local storage certainly does restrict flexibility.

Overcomplicating things is likely to do more harm than good, and you'll likely regret going down that path at some point, so save yourself some trouble and don't try. Get a real storage system, or build a real storage system, to do that kind of thing. Cheap ones include (won't vouch for any of them personally):

http://www.infortrend.com/
http://h18006.www1.hp.com/storage/disk_storage/msa_diskarrays/index.html
http://www.xyratex.com/Products/storage-systems/raid.aspx

Or build/buy a system to run openfiler. At my last company I had a quad-proc system running a few HP MSA shelves that ran openfiler. Though the software upgrade process for openfiler was so scary I never upgraded it, and more than one kernel at the time panicked at boot. I'm sure it's improved since then (2 years ago).

nate
John R Pierce
2009-Oct-20 20:43 UTC
[CentOS] openfiler (was: using CentOS as an iSCSI server?)
Absolutely CRITICAL to any SAN implementation is that the storage controller (iSCSI target, be it openfiler or whatever) remain 100% rock-solid stable at all times. You can NOT reboot a shared storage controller without shutting all client systems down first (or at least unmounting all SAN volumes).

It's non-trivial to implement a high-availability (active/standby) storage controller with iSCSI. Very hard, in fact. Commercial SANs are fully redundant, with redundant Fibre Channel cards on each client and storage controller, redundant Fibre Channel switches, redundant paths from the storage controllers to the actual drive arrays, etc. Many of them shadow the writeback cache, so if one controller fails, the other one has any write-cached blocks and can post them to the disk spindles transparently to maintain complete data coherency.

Trying to achieve this level of 0.99999 uptime/reliability with commodity hardware and software is not easy.
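To put a number on what five nines demands: if each controller is independently up with probability a, an active/standby pair is up whenever at least one is - assuming failover itself is perfect and instantaneous, which is exactly the hard part. A rough sketch of that arithmetic, with a hypothetical single-controller availability:

```python
def pair_availability(a):
    """Availability of an active/standby pair where each member is
    independently up with probability a, assuming failover never
    fails (the optimistic, best-case model)."""
    return 1.0 - (1.0 - a) ** 2

def downtime_minutes_per_year(avail):
    """Expected downtime per year implied by a given availability."""
    return (1.0 - avail) * 365.25 * 24 * 60

# Hypothetical: a single commodity controller at "two nines" (99%)
# is down roughly 3.7 days a year; a perfectly failing-over pair of
# them reaches "four nines" on paper - still short of 0.99999.
single = 0.99
pair = pair_availability(single)
print("pair availability: %.4f" % pair)
print("pair downtime: %.1f min/year" % downtime_minutes_per_year(pair))
```

The model flatters reality: it ignores correlated failures, failover bugs, and the write-cache coherency problem described above, all of which are why commercial SANs cost what they do.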