Hey,

http://zfsonlinux.org/epel.html

If you have a little time and resources, please install and report back
any problems you see.

A filesystem or volume sits within a zpool;
a zpool is made up of vdevs;
vdevs are made up of block devices.

A zpool is similar to an LVM volume group;
a vdev is similar to a RAID set.

Devices can be files.

Thanks,

Andrew
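The layering Andrew describes can be tried end-to-end without spare disks, since devices can be files. A sketch, assuming ZFS on Linux is already installed (pool and path names are invented):

```shell
# Block devices -- here, plain files
truncate -s 256M /tmp/zdev1 /tmp/zdev2

# ... grouped into a vdev (a mirror in this case), which makes up the zpool
zpool create demo mirror /tmp/zdev1 /tmp/zdev2

# ... and a filesystem sits within the zpool
zfs create demo/fs

# Clean up the experiment
zpool destroy demo
```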
Andrew,

We've been testing ZFS since about 10/24; see my original post (and
replies) on this list asking about its suitability, "ZFS on Linux in
production". So far, it's been rather impressive. Enabling compression
better than halved disk space utilization in a low-to-medium-bandwidth
(mainly archival) use case.

Dealing with many TB of data in a "real" environment is a very slow,
conservative process; our ZFS implementation has, so far, been limited
to a single redundant copy of a filesystem on a server that only backs
up other servers.

Our next big test is to try out ZFS filesystem send/receive in lieu of
our current backup processes based on rsync. Rsync is a fabulous tool,
but it is beginning to show performance/scalability issues dealing with
the many millions of files being backed up, and we're hoping that ZFS
filesystem replication solves this. This stage of deployment is due to
be in place by about 1/2014.

-Ben

On 11/30/2013 06:20 AM, Andrew Holway wrote:
> Hey,
>
> http://zfsonlinux.org/epel.html
>
> If you have a little time and resource please install and report back
> any problems you see.
>
> A filesystem or Volume sits within a zpool
> a zpool is made up of vdevs
> vdevs are made up of block devices.
>
> zpool is similar to LVM volume
> vdev is similar to raid set
>
> devices can be files.
>
> Thanks,
>
> Andrew
> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> http://lists.centos.org/mailman/listinfo/centos
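The send/receive workflow Ben is planning would look roughly like this. A sketch only; the pool, dataset, snapshot, and host names are invented:

```shell
# Take a snapshot of the dataset to be backed up
zfs snapshot tank/data@2013-12-04

# First run: send the full stream to the backup host
zfs send tank/data@2013-12-04 | ssh backup1 zfs receive backup/data

# Subsequent runs: send only the delta between two snapshots.
# This walks changed blocks rather than scanning files, which is why
# millions of unchanged files cost essentially nothing to skip.
zfs send -i tank/data@2013-12-03 tank/data@2013-12-04 \
    | ssh backup1 zfs receive backup/data
```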
On Sat, Nov 30, 2013 at 9:20 AM, Andrew Holway <andrew.holway at gmail.com> wrote:
> Hey,
>
> http://zfsonlinux.org/epel.html
>
> If you have a little time and resource please install and report back
> any problems you see.
>
> A filesystem or Volume sits within a zpool
> a zpool is made up of vdevs
> vdevs are made up of block devices.
>
> zpool is similar to LVM volume
> vdev is similar to raid set
>
> devices can be files.
>
> Thanks,
>
> Andrew

Andrew,

I've been using ZFS 0.6.1 on CentOS 6.4 for the past six months. For a
few years before that I was using mdadm with ext4 on CentOS 5. The main
reasons for switching were compression and snapshots integrated with
Samba for file shares. So far so good.

Ryan
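For reference, the compression and snapshot setup Ryan mentions amounts to only a couple of commands. A sketch; dataset names are invented, and lz4 assumes a pool recent enough to support it (otherwise lzjb):

```shell
# Turn on compression for the shared dataset; only data written
# afterwards is compressed
zfs set compression=lz4 tank/shares

# See how much space it is actually saving
zfs get compressratio tank/shares

# A dated snapshot, which Samba's shadow_copy2 VFS module can expose
# to Windows clients as "Previous Versions"
zfs snapshot tank/shares@$(date +%Y-%m-%d)
```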
> On 04.12.2013 14:05, John Doe wrote:
>> From: Lists <lists at benjamindsmith.com>
>>
>>> Our next big test is to try out ZFS filesystem send/receive in lieu
>>> of our current backup processes based on rsync. Rsync is a fabulous
>>> tool, but is beginning to show performance/scalability issues
>>> dealing with the many millions of files being backed up, and we're
>>> hoping that ZFS filesystem replication solves this.
>>
>> Not sure if I already mentioned it but maybe have a look at:
>> http://code.google.com/p/lsyncd/
>
> I'm not so sure inotify works well with millions of files, not to
> mention it uses rsync. :D
>
> -- Sent from the Delta quadrant using Borg technology! Nux!

I can attest to the usefulness of lsyncd for large numbers of files
(our file server has almost 2 million in active use, with a second
backup server that's lsync'd to the first).

Things to note:

- Yes, lsyncd does use rsync, but it issues an 'exclude *' followed by
  the list of only the file(s) that need updating at that moment.

- The inotify service can be jacked waaaay up (three kernel parameters)
  to handle millions of files if you wish. Just make sure you have lots
  of RAM. It's wise to tune ZFS to *not* use all available RAM.

- Updating is very quick and has never failed.

Regarding ZFS, our two ZFS-on-Linux servers have been in full
production for several months, with zero problems so far. Updating to
the latest version is quite painless. Today I had to replace a failed
4TB drive; it took just a few minutes and required only two commands to
do the replacement and start the resilvering process. This was done
while the server was in active use, with only a small performance hit.
Sweet!

Chuck
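Chuck doesn't name the knobs, but the "three kernel parameters" are presumably the standard fs.inotify sysctls, and the RAM caveat maps to the ZFS ARC cap. A sketch; the values are illustrative, not tuned advice, and the pool/device names in the replacement example are invented:

```shell
# Raise the inotify limits that lsyncd depends on
sysctl -w fs.inotify.max_user_watches=1048576
sysctl -w fs.inotify.max_queued_events=1048576
sysctl -w fs.inotify.max_user_instances=1024

# Cap the ZFS ARC so it does not consume all RAM (8 GiB here;
# takes effect when the zfs module is next loaded)
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf

# Replacing a failed drive while the pool stays online:
zpool replace tank /dev/disk/by-id/ata-OLDDISK /dev/disk/by-id/ata-NEWDISK
zpool status tank    # watch the resilver progress
```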
On 04.12.2013 14:05, nux at li.nux.ro wrote:
>> I can attest to the usefulness of 'lsyncd' for large numbers of files
>> (our file server has almost 2 million in active use, with a second
>> backup server that's lsync'd to the first).
>>
>> [...]
>>
>> - The inotify service can be jacked waaaay up (three kernel
>> parameters) to handle millions of files if you wish. Just make sure
>> you have lots of RAM.
>
> Be careful with it. Sadly I found out that inotify would consistently
> fail on InnoDB files (ibd); I had to use stupid while loops and check
> mtimes to perform some stuff that inotify-cron would've done much more
> elegantly ...
>
> -- Sent from the Delta quadrant using Borg technology! Nux!

Interesting point, something I didn't know. Fortunately, in my case
there are no db files involved directly, just db dumps wrapped in a
tarball along with other associated stuff, sent from other servers.
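The mtime-polling workaround Nux describes might look something like this. A sketch only; the file path, stamp file, and rsync target are invented for illustration:

```shell
#!/bin/sh
# Poll-based change detection for files that inotify misses (e.g.
# InnoDB .ibd files). mtime_changed FILE STAMPFILE returns 0 (and
# records the new mtime in STAMPFILE) when FILE's mtime differs from
# the last recorded value, 1 otherwise.
mtime_changed() {
    file=$1
    stamp=$2
    now=$(stat -c %Y "$file")
    last=$(cat "$stamp" 2>/dev/null || true)
    if [ "$now" != "$last" ]; then
        echo "$now" > "$stamp"
        return 0
    fi
    return 1
}

# The "stupid while loop": poll every 30 seconds, sync only on change.
# while true; do
#     mtime_changed /var/lib/mysql/app/orders.ibd /var/tmp/orders.stamp \
#         && rsync -a /var/lib/mysql/app/orders.ibd backup1:/srv/backup/
#     sleep 30
# done
```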
I would expect that lsync'ing db files could be a nasty non-stop process if the database is constantly being updated, so using the db's own replication tools would be best, with inotify/lsyncd configured to ignore the db directories. I believe that, by default, lsyncd instructs rsync to do whole-file transfers, so a large db could be a real problem. Thanks for the important heads-up!
On 11/30/2013 06:20 AM, Andrew Holway wrote:
> Hey,
>
> http://zfsonlinux.org/epel.html
>
> If you have a little time and resource please install and report back
> any problems you see.

Andrew,

I want to run /var on ZFS, but when I try to move /var over, the system
won't boot thereafter, with errors about /var/log missing. The Ubuntu
howto for ZFS indicates that while it's even possible to boot from ZFS,
doing so is a rather long and complicated process. I don't want to boot
from ZFS, but it appears that grub needs to be set up with ZFS support
in order to mount ZFS filesystems at boot, and it's possible that EL6's
grub just isn't new enough. Is there a howto or set of instructions for
setting up ZFS on CentOS 6 so that it's available at boot?

Thanks,

Ben
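One avenue worth checking before blaming grub: grub is only involved when booting *from* ZFS, while mounting datasets at boot is handled by the ZoL init script and the ordinary fstab mount pass. A hedged, untested sketch using a legacy mountpoint (dataset names invented; the catch to verify is that the zfs module loads before anything writes to /var):

```shell
# Put /var on its own dataset and hand mount control to /etc/fstab
zfs create tank/var
zfs set mountpoint=legacy tank/var

# /etc/fstab entry, mounted by the normal boot-time mount pass:
#   tank/var   /var   zfs   defaults   0 0

# Make sure the ZoL init script is enabled at boot
chkconfig zfs on
```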