This is not possible (yet!). I believe this sort of option is being
explored. To add existing data to gluster, you will need to use the
gluster client and copy the data from your current NFS-mounted folder
over to the gluster-client-mounted folder.

-- 
Adam Brenner
Computer Science, Undergraduate Student
Donald Bren School of Information and Computer Sciences
Research Computing Support
Office of Information Technology, http://www.oit.uci.edu/rcs/
University of California, Irvine
www.ics.uci.edu/~aebrenne/
aebrenne at uci.edu

On Thu, Sep 13, 2012 at 1:48 PM, Lonni J Friedman <netllama at gmail.com> wrote:
> Yup, that worked, thanks. Sorry if this was something obvious; I
> didn't see any mention of this concept/requirement in the
> documentation.
>
> What's the correct way to bring up a pre-existing NFS server inside of
> glusterfs, so that it's replicated to some new server? It would be
> somewhat hacky to have to write data just to get everything
> replicated.
>
> On Thu, Sep 13, 2012 at 1:21 PM, John Mark Walker <johnmark at redhat.com> wrote:
>> If you wrote directly to the bricks instead of via a glusterfs mountpoint,
>> then you're missing xattrs, which confuses glusterfs. It can tell there's
>> something there, but without xattrs it 'doesn't compute.'
>>
>> I don't think that bug is related to your issue at all.
>>
>> -------- Original Message --------
>> From: Gerald Brandt
>> Sent: Thu, Sep 13, 2012 03:24 PM
>> To: Lonni J Friedman
>> CC: gluster-users at gluster.org
>> Subject: Re: [Gluster-users] problems with replication & NFS
>>
>> Hi,
>>
>> You need to write to the gluster-mounted partition, not the XFS-mounted one.
>>
>> Gerald
>>
>> ----- Original Message -----
>>> Greetings,
>>> I'm trying to set up a small GlusterFS test cluster, in order to
>>> gauge the feasibility of using it in a large production environment.
>>> I've been working through the official Admin Guide
>>> (Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf) along with
>>> the website setup instructions (
>>> http://www.gluster.org/community/documentation/index.php/Getting_started_overview
>>> ).
>>>
>>> What I have are two Fedora16-x86_64 servers, each with a 20GB
>>> XFS-formatted partition set aside as a brick. I'm using version
>>> 3.3.0. I set up each for replication, and it seems like it's set up
>>> & working:
>>> ####
>>> $ gluster volume info gv0
>>>
>>> Volume Name: gv0
>>> Type: Replicate
>>> Volume ID: 6c9fbbc7-e382-4f26-afae-60f8658207c5
>>> Status: Started
>>> Number of Bricks: 1 x 2 = 2
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 10.31.99.166:/mnt/sdb1
>>> Brick2: 10.31.99.165:/mnt/sdb1
>>> ####
>>>
>>> This is where my problems begin. I assumed that if replication was
>>> truly working, then any changes to the contents of /mnt/sdb1 on one
>>> brick would automatically get replicated to the other brick.
>>> However, that isn't happening. In fact, nothing seems to be
>>> happening. I've added new files and changed pre-existing ones, yet
>>> none of it ever replicates to the other brick. Both bricks were
>>> empty prior to formatting the filesystem and setting them up for
>>> this test instance. Surely I must be missing something obvious, as
>>> something this fundamental & basic must work, right?
>>>
>>> The next problem is that my production environment would need to
>>> access the volume via NFS (rather than 'native' gluster). I had a
>>> third system set up (also Fedora16-x86_64), and was able to
>>> successfully NFS-mount the gluster volume. Or so I thought. When I
>>> attempted to simply look at the files on the mount point (using
>>> 'ls'), it seemed to work at first, but shortly afterwards it failed
>>> with a cryptic "Invalid argument" error. So I manually unmounted,
>>> then remounted, and tried again.
>>> Once again, it worked OK for a few seconds, then died again with the
>>> same "Invalid argument" error:
>>> ########
>>> [root at cuda-fs3 basebackups]# mount -t nfs -o vers=3,mountproto=tcp
>>> 10.31.99.165:/gv0 /mnt/gv0
>>> [root at cuda-fs3 basebackups]# ls -l /mnt/gv0/
>>> total 8
>>> -rw-r--r-- 0 root root 6670 Sep 13 10:21 foo1
>>> [root at cuda-fs3 basebackups]# ls -l /mnt/gv0/
>>> total 8
>>> -rw-r--r-- 0 root root 6670 Sep 13 10:21 foo1
>>> [root at cuda-fs3 basebackups]# ls -l /mnt/gv0/
>>> ls: cannot access /mnt/gv0/foo1: Invalid argument
>>> total 0
>>> -????????? ? ? ? ? ? foo1
>>> ########
>>>
>>> The duration between the mount command invocation and the failed
>>> 'ls' command was literally about 5 seconds. I have numerous other
>>> traditional NFS mounts that work just fine. It's only the gluster
>>> volume that exhibits this behavior. I did some googling, and this
>>> bug seems to match my problem exactly:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=800755
>>>
>>> I can't quite tell from the bug whether it's actually fixed in the
>>> released 3.3.0 or not. Can someone clarify whether NFS is supposed
>>> to work in 3.3.0? Am I doing something wrong?
>>>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
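[Editor's note: the migration step Adam describes at the top of the thread might look like the sketch below. It must be run against a live cluster; the old NFS server name and the local mount-point paths are illustrative assumptions, while the volume address comes from the thread.]

```shell
# Mount the existing NFS export and the gluster volume via the native
# FUSE client (server/volume from the thread; local paths assumed):
mount -t nfs oldserver:/export /mnt/nfs-old
mount -t glusterfs 10.31.99.165:/gv0 /mnt/gv0-native

# Copy through the gluster mount point, never onto a brick directly,
# so glusterfs assigns its xattrs and replicates each file to both bricks:
rsync -a /mnt/nfs-old/ /mnt/gv0-native/
```

The trailing slashes on the rsync arguments copy the *contents* of the old export into the volume root rather than nesting an extra directory.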
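[Editor's note: one way to see the xattrs John Mark refers to is to inspect a file directly on a brick, as root on the brick server. The brick path and filename below are taken from the thread; this is a diagnostic sketch, not part of the original exchange.]

```shell
# Dump all extended attributes of a file as stored on the brick:
getfattr -d -m . -e hex /mnt/sdb1/foo1

# A file written through a glusterfs mount carries gluster's metadata
# (e.g. a trusted.gfid attribute, plus trusted.afr.* changelog attributes
# on a Replicate volume). A file copied straight onto the brick, bypassing
# the client, shows none of these -- which is why glusterfs gets confused.
```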