Patrick Kelly
2008-Oct-13 15:50 UTC
[Ocfs2-users] How are people using OCFS2 - any limitations
University of California, Davis, runs a campus Sakai application. Sakai is a suite of modules that provides online courseware to the campus, and the application is supported by universities worldwide.

We are running Sakai on RHEL 4.0. We have six application servers that retrieve and update files stored in our AFS (Andrew File System) infrastructure service. There is currently about 500 GB of data stored there, and we anticipate this will grow to 1 TB over the next year or two. We are considering moving that data to an OCFS2 file system on an EMC SAN, using Fibre Channel connections to the application servers.

Can anyone give us some idea of their usage? How large have the file systems grown? How many nodes are connected? Any issues with expansion? Any information would be appreciated.

Thanks,

Patrick Kelly
Manager, Campus Data Center
UC Davis
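On the expansion question: OCFS2 volumes are normally grown in two steps, first enlarging the LUN on the array and rescanning it on the nodes, then growing the filesystem with tunefs.ocfs2. The sketch below is illustrative only; the device name is a placeholder, and whether the grow can be done online or requires unmounting the volume on all nodes depends on the ocfs2-tools and kernel versions actually deployed.

    # /dev/sdX is a placeholder for the SAN LUN as seen on a node.
    echo 1 > /sys/block/sdX/device/rescan   # pick up the new LUN size on each node
    mounted.ocfs2 -f /dev/sdX               # check which nodes have the volume mounted
    tunefs.ocfs2 -S /dev/sdX                # grow the OCFS2 filesystem to fill the device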
Sunil Mushran
2008-Oct-13 20:41 UTC
[Ocfs2-users] How are people using OCFS2 - any limitations
From the dev point of view, make sure you use OCFS2 1.4. That would mean upgrading the servers to (RH)EL5 U2 (or SLES10 SP2). I'll let actual users answer the questions you have asked.
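For anyone following the upgrade advice, two quick checks are useful: which OCFS2 module a node is actually running, and whether the volume was formatted with enough node slots for all six application servers. The commands and the cluster.conf excerpt below are only a sketch; the cluster name, hostname, and address are made-up placeholders.

    modinfo ocfs2 | grep -i version      # kernel module version on this node
    rpm -q ocfs2-tools                   # userspace tools version
    mkfs.ocfs2 -L sakai -N 8 /dev/sdX    # format with 8 node slots (room beyond the 6 servers)

    # /etc/ocfs2/cluster.conf -- identical on every node; one node: stanza
    # per server (the remaining five stanzas follow the same pattern).
    cluster:
            node_count = 6
            name = sakaicluster

    node:
            ip_port = 7777
            ip_address = 192.168.1.101
            number = 0
            name = app1
            cluster = sakaicluster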
Henrik Carlqvist
2008-Oct-13 21:14 UTC
[Ocfs2-users] How are people using OCFS2 - any limitations
On Mon, 13 Oct 2008 08:50:33 -0700 Patrick Kelly <pjkelly at ucdavis.edu> wrote:

> Can anyone give us some idea of their usage? How large have the file
> systems grown? How many nodes are connected? Any issues with expansion?
> Any information would be appreciated.

About two years ago I made an attempt to get OCFS2 working on two Dell servers connected to an EMC SAN. If I remember right, they were connected by a Brocade FC switch to their single-channel QLogic FC cards. The purpose of this two-node configuration was to build an active-active HA NFS server. I don't remember for sure, but I think they were running RHEL 4.0; it could also have been RHEL 5.0. We chose this solution because my company wanted commercial support from trusted vendors. Dell did the installation and configuration of the Red Hat servers, and they had also delivered the EMC SAN.

We had a number of problems with that configuration. Sometimes the computers lost their connections to the SAN. For reasons we never found out, one of the servers sometimes rebooted, followed by a reboot of the other server as well. The servers also had a closed-source software called Dell PowerPath installed. That software might be useful with dual-channel HBA cards, but we only had single-channel HBA cards. It turned out that the version of PowerPath we had installed reduced the disk bandwidth to less than half the bandwidth of raw access to the disks.

Dell gave us support and tried to solve our problems, but eventually we came to a point where our system still wasn't usable and continued support would cost far too much. We had to give up that configuration.

For almost a year now, the two Dell servers have run Slackware 12.0, and the QLogic cards are directly connected to an EasyRAID disk array. We have this system working fine as our two-node HA NFS server. I don't know for sure which replacement made it work, but we made the following changes:

- Slackware 12.0 instead of RHEL
- No closed-source modules tainting the kernel
- No Dell PowerPath installed
- HBA cards directly connected to the RAID array without any FC switch
- EasyRAID disk array with IDE disks instead of EMC with FC disks

The servers share a total of about 900 GB over NFS and SMB, but those 900 GB are split into different partitions.

My configuration has some differences, but also some similarities with your configuration. I hope that you will find my experiences useful.

regards Henrik
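If the chain of reboots Henrik describes was the O2CB cluster stack self-fencing after the SAN path dropped (a guess, not a diagnosis), the mitigations usually discussed are redundant paths handled by the in-kernel dm-multipath (multipath -ll to inspect them) rather than a vendor stack, and a larger disk heartbeat threshold so that short path flaps do not take a node down. The values below are purely illustrative and should match on all nodes.

    # /etc/sysconfig/o2cb -- illustrative values, not a recommendation
    O2CB_ENABLED=true
    O2CB_BOOTCLUSTER=ocfs2
    # A node is declared dead after missing this many 2-second heartbeat
    # iterations; roughly (threshold - 1) * 2 seconds of tolerance.
    O2CB_HEARTBEAT_THRESHOLD=61

    # The cluster stack must be restarted (volumes unmounted) for the new
    # threshold to take effect:
    /etc/init.d/o2cb restart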