All,

We are pleased to announce the release of OCFS2 1.4 *BETA*.

We are currently targeting production with the SLES10 SP2 and RHEL5 U2 releases, both of which are currently in BETA. OCFS2 1.4 *BETA* has been shipping with SLES10 SP2 BETA for some time now. This announcement provides the corresponding packages for the RHEL5 U2 BETA release.

As this is a BETA release, it is not advisable to deploy it in a production environment. However, we would like to encourage you to use it in your test cluster and provide us with constructive feedback.

Before deploying, all users are encouraged to read the README in its entirety, especially the sections on compatibility and the new filesystem defaults.

The packages for OCFS2 1.3.9-0.1 *BETA* for RHEL5 U2 BETA are available here:
http://oss.oracle.com/projects/ocfs2/files/RedHat/RHEL5_BETA/

The packages for OCFS2-TOOLS 1.3.9-0.1 *BETA* for RHEL5 are available here:
http://oss.oracle.com/projects/ocfs2-tools/files/RedHat/RHEL5/i386/beta/
http://oss.oracle.com/projects/ocfs2-tools/files/RedHat/RHEL5/x86_64/beta/

As always, we look forward to hearing from you on the ocfs2-users@oss.oracle.com mailing list.

The OCFS2 Team

OCFS2: http://oss.oracle.com/projects/ocfs2
TOOLS: http://oss.oracle.com/projects/ocfs2-tools

README
===========================================================================
WHAT'S NEW

This release makes available the features that have been steadily accumulating in the mainline Linux kernel tree over the past 18 months or so.
The list of features added since the 1.2 release is as follows:

* File Attribute Support
* Directory Readahead
* Performance Enhancement - stat(2)
* Performance Enhancement - unlink(2)
* Splice IO (support to be enabled by release)
* Atime/Mtime Updates
* Sparse File Support (ondisk change)
* Unwritten Extents/Punch Holes (ondisk change, support via ioctl only)
* Shared Writeable mmap(2)
* Data in Inode (ondisk change, tools support expected by release)
* Online Filesystem Resize
* Clustered flock(2)
* Ordered Journal Mode

For the full description, please refer to the following link:
http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2-new-features.html#FEATURES

FILESYSTEM COMPATIBILITY

OCFS2 1.2 is fully ondisk compatible with OCFS2 1.4. Users installing this release will be able to mount their existing volumes as-is. However, a rolling upgrade to 1.4 is not possible, as the network protocol has changed.

Also, users mounting existing volumes will not have access to all the new features. Features entailing ondisk changes, like Sparse Files, Unwritten Extents and Data-in-Inode, will need to be explicitly enabled using tunefs.ocfs2. Once enabled, the same volumes can still be mounted using the older 1.2 software, but only after that feature is disabled again.

TOOLS COMPATIBILITY

Simply put, the latest OCFS2-TOOLS always supports all existing versions of the file system. While users looking to upgrade to 1.4 (or to the latest Linux kernel) must install OCFS2-TOOLS 1.4, existing 1.2 users can upgrade the tools at their own convenience.

NEW FILESYSTEM DEFAULTS

Sparse file support is activated by default for volumes formatted using the new mkfs.ocfs2. Users wishing to retain full compatibility with OCFS2 1.2 should specify "--fs-feature-level=max-compat" during format.

The other change in the defaults concerns the journaling mode.
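As an illustration of the format-time default and the tunefs.ocfs2 feature toggles mentioned above, the invocations might look like the sketch below. The device name is a placeholder, and the "--fs-features" flag spelling is an assumption; verify both flags against the man pages shipped with your ocfs2-tools before use:

```shell
# Format a new volume with full OCFS2 1.2 compatibility, disabling the
# new on-disk features (such as sparse files) that are now the default.
mkfs.ocfs2 --fs-feature-level=max-compat -L myvolume /dev/sdX1

# Later, enable sparse file support on an existing volume. Run this with
# the volume unmounted on all nodes.
# NOTE: "--fs-features" is an assumed flag name; check tunefs.ocfs2(8).
tunefs.ocfs2 --fs-features=sparse /dev/sdX1

# To make the volume mountable by 1.2 software again, disable the feature.
tunefs.ocfs2 --fs-features=nosparse /dev/sdX1
```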
While OCFS2 1.2 supported the writeback data journaling mode, 1.4 not only adds support for the ordered data journaling mode, it also makes it the default. Users wishing to keep using the writeback mode can do so by mounting the volume with the data=writeback mount option.

The difference between the two modes is that in the ordered mode, the file system flushes the file data to disk before committing the metadata changes; in the writeback mode, no such write ordering is preserved. While the writeback mode guarantees internal filesystem integrity and provides better overall throughput than the ordered mode, it leaves open the possibility of stale data appearing in files after a crash and subsequent journal recovery.

DISTRIBUTIONS

OCFS2 1.4 is a backport of the filesystem in the mainline Linux kernel tree, 2.6.25-rc6. It has been backported to work only on the 2.6.16 (SLES10) and 2.6.18 (RHEL5) kernels. No attempt has been made to ensure that it even builds on other kernels.

Users running a different distribution are encouraged to use the filesystem shipped with that distribution and not attempt to build OCFS2 1.4 on it. There is no reason to do so, as 1.4 is simply a backport of the version in the mainline tree and differs very little from it in functionality. Please note that, as a rule, we apply bug fixes to all relevant kernel trees and not just the enterprise kernels.

FEEDBACK

The one known missing piece in this release is CDSL support. Our implementation in the 1.2 release was not accepted by the Linux kernel community. While we could merge the support back into the 1.4 release alone, we are aiming to make the 1.4 release look as much like mainline as possible, so as to make it easier for users to move between kernels and distributions.

With that in mind, we are looking for feedback concerning the use of CDSL. Specifically: are you using CDSL? If so, how many files? Which types of files? Your answers will help us decide our approach to the solution.
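Returning to the journaling modes described under NEW FILESYSTEM DEFAULTS, selecting either mode is done at mount time. A sketch, with the device name and mount point as placeholders:

```shell
# Mount with the new 1.4 default, ordered data journaling.
mount -t ocfs2 /dev/sdX1 /mnt/ocfs2

# Mount with the older writeback behaviour instead, via the
# data=writeback mount option mentioned above.
mount -t ocfs2 -o data=writeback /dev/sdX1 /mnt/ocfs2
```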
WHAT'S MISSING

Apart from CDSL support, we still need to add tools support for data-in-inode, enable splice io and update the documentation.

FUTURE PLANS

While not set in stone, the list of "major" features that we are or will be working on looks as follows:

1. Framework for Integrating with Userspace Cluster Stack(s)

   This will allow OCFS2 to work with different userspace cluster stacks. We are aiming to push the changes for this in the upcoming 2.6.26 kernel.

2. CMAN Integration

   In the coming months, the open source cluster stacks distributed by Red Hat and SUSE are likely to merge into a new CMAN that is layered atop OpenAIS. We will use the above framework to integrate OCFS2 with the new CMAN. Please note that we will continue to support the "classic" O2CB cluster stack and make the choice of cluster stack user configurable. For more information, please refer to this link:
   http://sources.redhat.com/cluster/wiki/HomePage

3. Extended Attributes

   This feature is currently in development. Once it is committed into the mainline tree, we will decide whether to backport it to 1.4 or wait for SLES11 / (RH)EL6 to make it available on Enterprise distributions.

4. POSIX Locking

   While we managed to add support for clustered flock(2) (aka BSD locking) in 2.6.25, we are still looking to add support for its POSIX cousin.

5. Online Adding of Node Slots

   This will allow users to increase the number of node slots without having to umount the volume on all nodes. (The number of node slots dictates the number of nodes that can mount the volume concurrently.)

6. Online Defragmentation

   This will make the file system coalesce file data extents in order to boost performance.

7. Indexed Directories

   This will offer better performance during lookups in directories holding more than 10,000 files.

8. JBD2 Support

   This will allow users to increase volume sizes to 4PB. We will take on this task after the ext4/jbd2 code base has somewhat stabilized.
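Since clustered flock(2) uses the standard BSD advisory-locking interface, existing tools work unchanged; on an OCFS2 volume the lock is simply arbitrated across the cluster. A small local sketch using flock(1) from util-linux (the lock file path here is an arbitrary temporary file, not an OCFS2 mount):

```shell
# Create a file to lock; on OCFS2 this would live on the shared volume,
# in which case the exclusive lock is visible to all nodes.
lockfile=$(mktemp)

# Take an exclusive (BSD-style) lock and run a command while holding it;
# the lock is released when the command exits.
flock -x "$lockfile" -c 'echo "exclusive lock held"'

# A non-blocking attempt (-n) fails immediately if another process holds
# the lock; nothing holds it here, so this succeeds.
flock -n -x "$lockfile" -c 'echo "non-blocking lock acquired"'

rm -f "$lockfile"
```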
==============================================================================