Tejas N. Bhise
2010-Mar-18 16:36 UTC
[Gluster-users] Announcement: Alpha Release of Native NFS for GlusterFS
Dear Community Users,

Gluster is happy to announce the ALPHA release of the native NFS server. The native NFS server is implemented as an NFS translator, so it integrates cleanly with the NFS protocol on one side and the GlusterFS protocol on the other.

This is an important step in our strategy to extend the benefits of Gluster to other operating systems that can benefit from a better NFS-based data service, while still enjoying all the backend smarts that Gluster provides.

The new NFS server also strongly supports our efforts towards becoming a storage platform of choice for virtualization.

The release notes for the NFS ALPHA release are available at:

http://ftp.gluster.com/pub/gluster/glusterfs/qa-releases/nfs-alpha/GlusterFS_NFS_Alpha_Release_Notes.pdf

The release notes describe where RPMs and source code can be obtained and where bugs found in this ALPHA release can be filed. Some usage examples are also provided.

Please be aware that this is an ALPHA release and should in no way be used in production. Gluster is not responsible for any loss of data or service resulting from the use of this ALPHA NFS release.

Feel free to send feedback, comments and questions to: nfs-alpha at gluster.com

Regards,
Tejas Bhise.
hgichon
2010-Mar-19 02:17 UTC
[Gluster-users] Announcement: Alpha Release of Native NFS for GlusterFS
wow good news! thanks.

I installed from source, but the mount failed. Is my config wrong?

- kpkim

root at ccc1:/usr/local/etc/glusterfs# mount -t glusterfs /usr/local/etc/glusterfs/nfs.vol /ABCD -o loglevel=DEBUG
Volume 'nfs-server', line 60: type 'nfs/server' is not valid or not found on this machine
error in parsing volume file /usr/local/etc/glusterfs/nfs.vol
exiting
root at ccc1:/usr/local/etc/glusterfs#

[pid 4270] open("/usr/local/lib/glusterfs/nfsalpha1/xlator/nfs/server.so", O_RDONLY) = 7
[pid 4270] read(7, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\337\0\0\0\0\0\0"..., 832) = 832
[pid 4270] fstat(7, {st_mode=S_IFREG|0755, st_size=781248, ...}) = 0
[pid 4270] mmap(NULL, 2337392, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 7, 0) = 0x7f4be41bc000

my config
-----------------------------------------------------------------------------------------
root at ccc1:/usr/local/etc/glusterfs# cat glusterfsd.vol
## file auto generated by /usr/local/bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /usr/local/bin/glusterfs-volgen -n NAS 192.168.1.127:/export 192.168.1.128:/export --nfs --cifs

volume posix1
  type storage/posix
  option directory /export
end-volume

volume locks1
  type features/locks
  subvolumes posix1
end-volume

volume brick1
  type performance/io-threads
  option thread-count 8
  subvolumes locks1
end-volume

volume server-tcp
  type protocol/server
  option transport-type tcp
  option auth.addr.brick1.allow *
  option transport.socket.listen-port 6996
  option transport.socket.nodelay on
  subvolumes brick1
end-volume
-----------------------------------------------------------------------------------------
root at ccc1:/usr/local/etc/glusterfs# cat nfs.vol
## file auto generated by /usr/local/bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /usr/local/bin/glusterfs-volgen -n NAS 192.168.1.127:/export 192.168.1.128:/export --nfs --cifs

volume 192.168.1.128-1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.128
  option transport.socket.nodelay on
  option transport.remote-port 6996
  option remote-subvolume brick1
end-volume

volume 192.168.1.127-1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.127
  option transport.socket.nodelay on
  option transport.remote-port 6996
  option remote-subvolume brick1
end-volume

volume distribute
  type cluster/distribute
  subvolumes 192.168.1.127-1 192.168.1.128-1
end-volume

#volume writebehind
#  type performance/write-behind
#  option cache-size 4MB
#  subvolumes distribute
#end-volume

#volume readahead
#  type performance/read-ahead
#  option page-count 4
#  subvolumes writebehind
#end-volume

volume iocache
  type performance/io-cache
  option cache-size 128MB
  option cache-timeout 1
  subvolumes distribute
end-volume

#volume quickread
#  type performance/quick-read
#  option cache-timeout 1
#  option max-file-size 64kB
#  subvolumes iocache
#end-volume

#volume statprefetch
#  type performance/stat-prefetch
#  subvolumes quickread
#end-volume

volume nfs-server
  type nfs/server
  subvolumes iocache
  option rpc-auth.addr.allow *
end-volume

Tejas N. Bhise wrote:
> Dear Community Users,
>
> Gluster is happy to announce the ALPHA release of the native NFS Server.
> [...]
Shehjar Tikoo
2010-Mar-19 08:33 UTC
[Gluster-users] Announcement: Alpha Release of Native NFS for GlusterFS
----- "hgichon" <hgichon at gmail.com> wrote:
> wow good news! thanks.
>
> I was installed source. but mount failed.
>
> my config is wrong?
>
> - kpkim
>
> root at ccc1:/usr/local/etc/glusterfs# mount -t glusterfs /usr/local/etc/glusterfs/nfs.vol /ABCD -o loglevel=DEBUG

Hi,

The volfile containing the nfs/server translator needs to be started with the glusterfsd command, just like starting up the GlusterFS server process; it is not started with the mount command. Exporting a GlusterFS volume as an NFS export does not require the volume to be mounted with the mount command at all.

Thanks,
-Shehjar

> Volume 'nfs-server', line 60: type 'nfs/server' is not valid or not found on this machine
> error in parsing volume file /usr/local/etc/glusterfs/nfs.vol
> exiting
>
> [...]
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
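For anyone hitting the same error, the intended workflow can be sketched roughly as below, reusing the volfile paths from the original message. The NFS client mount line is an assumption on my part (the exact export path and supported mount options are described in the alpha release notes, so double-check there):

```shell
# Start the brick/server volfile and the NFS volfile as server processes
# with glusterfsd -- NOT with `mount -t glusterfs`:
glusterfsd -f /usr/local/etc/glusterfs/glusterfsd.vol   # backend brick process
glusterfsd -f /usr/local/etc/glusterfs/nfs.vol          # NFS translator process

# Then, from a separate NFS client, mount the export over NFSv3/TCP.
# The export path "/iocache" (the nfs/server subvolume name) is an assumption:
mount -t nfs -o vers=3,tcp 192.168.1.127:/iocache /mnt/nfs
```

The key point is that `mount -t glusterfs` runs the FUSE client, which has no reason to load the nfs/server translator, hence the "not valid or not found" parse error.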
Justice London
2010-Mar-19 21:09 UTC
[Gluster-users] Announcement: Alpha Release of Native NFS for GlusterFS
I'm sorry, but I don't know how you tested this. Using a bare-bones configuration with the NFS translator and a mirror configuration between two systems (no performance translators, etc.), I can lock up the entire system after writing 160-180 MB of data. Basically:

dd if=/dev/full of=testfile bs=1M count=1000

is enough to lock the entire machine. This is on a CentOS 5.4 system with a Xen backend (for testing). I don't know what you tested with, but I can't get this stable... at all.

Justice London
jlondon at lawinfo.com

On Thu, 2010-03-18 at 10:36 -0600, Tejas N. Bhise wrote:
> Dear Community Users,
>
> Gluster is happy to announce the ALPHA release of the native NFS Server.
> [...]
hgichon
2010-Mar-22 02:21 UTC
[Gluster-users] Announcement: Alpha Release of Native NFS for GlusterFS
Hello~

When I used the NFS xlator, a segmentation fault occurred. How can I resolve this problem?

- kpkim

root at ccc1:/usr/local/etc/glusterfs# glusterfsd -f nfs1.vol --log-level=TRACE
---------
[2010-03-22 11:14:15] D [glusterfsd.c:424:_get_specfp] glusterfs: loading volume file nfs1.vol
[2010-03-22 11:14:15] T [spec.y:185:new_section] parser: New node for 'posix1'
[2010-03-22 11:14:15] T [xlator.c:700:xlator_set_type] xlator: attempt to load file /usr/local/lib/glusterfs/nfsalpha1/xlator/storage/posix.so
[2010-03-22 11:14:15] T [spec.y:211:section_type] parser: Type:posix1:storage/posix
[2010-03-22 11:14:15] T [spec.y:240:section_option] parser: Option:posix1:directory:/export
[2010-03-22 11:14:15] T [spec.y:324:section_end] parser: end:posix1
[2010-03-22 11:14:15] T [spec.y:185:new_section] parser: New node for 'server-tcp'
[2010-03-22 11:14:15] T [xlator.c:700:xlator_set_type] xlator: attempt to load file /usr/local/lib/glusterfs/nfsalpha1/xlator/protocol/server.so
[2010-03-22 11:14:15] T [spec.y:211:section_type] parser: Type:server-tcp:protocol/server
[2010-03-22 11:14:15] T [spec.y:240:section_option] parser: Option:server-tcp:transport-type:tcp
[2010-03-22 11:14:15] T [spec.y:240:section_option] parser: Option:server-tcp:auth.addr.posix1.allow:*
[2010-03-22 11:14:15] T [spec.y:240:section_option] parser: Option:server-tcp:transport.socket.listen-port:6996
[2010-03-22 11:14:15] T [spec.y:309:section_sub] parser: child:server-tcp->posix1
[2010-03-22 11:14:15] T [spec.y:324:section_end] parser: end:server-tcp
[2010-03-22 11:14:15] T [spec.y:185:new_section] parser: New node for '192.168.1.127-1'
[2010-03-22 11:14:15] T [xlator.c:700:xlator_set_type] xlator: attempt to load file /usr/local/lib/glusterfs/nfsalpha1/xlator/protocol/client.so
[2010-03-22 11:14:15] T [spec.y:211:section_type] parser: Type:192.168.1.127-1:protocol/client
[2010-03-22 11:14:15] T [spec.y:240:section_option] parser: Option:192.168.1.127-1:transport-type:tcp
[2010-03-22 11:14:15] T [spec.y:240:section_option] parser: Option:192.168.1.127-1:remote-host:192.168.1.127
[2010-03-22 11:14:15] T [spec.y:240:section_option] parser: Option:192.168.1.127-1:remote-port:6996
[2010-03-22 11:14:15] T [spec.y:240:section_option] parser: Option:192.168.1.127-1:remote-subvolume:posix1
[2010-03-22 11:14:15] T [spec.y:324:section_end] parser: end:192.168.1.127-1
[2010-03-22 11:14:15] T [spec.y:185:new_section] parser: New node for 'nfsd'
[2010-03-22 11:14:15] T [xlator.c:700:xlator_set_type] xlator: attempt to load file /usr/local/lib/glusterfs/nfsalpha1/xlator/nfs/server.so
[2010-03-22 11:14:15] D [xlator.c:745:xlator_set_type] xlator: dlsym(dumpops) on /usr/local/lib/glusterfs/nfsalpha1/xlator/nfs/server.so: undefined symbol: dumpops -- neglecting
[2010-03-22 11:14:15] T [spec.y:211:section_type] parser: Type:nfsd:nfs/server
[2010-03-22 11:14:15] T [spec.y:309:section_sub] parser: child:nfsd->192.168.1.127-1
[2010-03-22 11:14:15] T [spec.y:240:section_option] parser: Option:nfsd:rpc-auth.addr.allow:*
[2010-03-22 11:14:15] T [spec.y:324:section_end] parser: end:nfsd
================================================================================
Version            : glusterfs nfsalpha1 built on Mar 19 2010 10:15:12
git: v3.0.0-143-gafc1635
Starting Time      : 2010-03-22 11:14:15
Command line       : glusterfsd -f nfs1.vol --log-level=TRACE
PID                : 6219
System name        : Linux
Nodename           : ccc1
Kernel Release     : 2.6.31-20-server
Hardware Identifier: x86_64

Given volfile:
+------------------------------------------------------------------------------+
  1: ## file auto generated by /usr/local/bin/glusterfs-volgen (export.vol)
  2: # Cmd line:
  3: # $ /usr/local/bin/glusterfs-volgen -n NAS 192.168.1.127:/export 192.168.1.128:/export --nfs --cifs
  4:
  5: volume posix1
  6:   type storage/posix
  7:   option directory /export
  8: end-volume
  9:
 10: volume server-tcp
 11:   type protocol/server
 12:   option transport-type tcp
 13:   option auth.addr.posix1.allow *
 14:   option transport.socket.listen-port 6996
 15:   subvolumes posix1
 16: end-volume
 17:
 18: volume 192.168.1.127-1
 19:   type protocol/client
 20:   option transport-type tcp
 21:   option remote-host 192.168.1.127
 22:   option remote-port 6996
 23:   option remote-subvolume posix1
 24: end-volume
 25:
 26: volume nfsd
 27:   type nfs/server
 28:   subvolumes 192.168.1.127-1
 29:   option rpc-auth.addr.allow *
 30: end-volume
 31:
+------------------------------------------------------------------------------+
[2010-03-22 11:14:15] D [glusterfsd.c:1374:main] glusterfs: running in pid 6219
[2010-03-22 11:14:15] T [rpcsvc-auth.c:106:rpcsvc_auth_init_auth] rpc-service: Authentication enabled: AUTH_UNIX
[2010-03-22 11:14:15] T [rpcsvc-auth.c:106:rpcsvc_auth_init_auth] rpc-service: Authentication enabled: AUTH_NULL
[2010-03-22 11:14:15] T [rpcsvc.c:86:rpcsvc_stage_init] rpc-service: event pool size: 15360
[2010-03-22 11:14:15] D [rpcsvc.c:167:rpcsvc_init] rpc-service: RPC service inited.
[2010-03-22 11:14:15] T [nfs.c:342:nfs_init_subvolumes] nfs: inode table lru: 90000
[2010-03-22 11:14:15] D [nfs.c:346:nfs_init_subvolumes] nfs: Initing subvolume: 192.168.1.127-1
[2010-03-22 11:14:15] T [nfs.c:365:nfs_init_subvolumes] nfs: Inited volumes: 1
[2010-03-22 11:14:15] D [nfs.c:499:init] nfs: NFS service started
[2010-03-22 11:14:15] D [client-protocol.c:6603:init] 192.168.1.127-1: defaulting frame-timeout to 30mins
[2010-03-22 11:14:15] D [client-protocol.c:6614:init] 192.168.1.127-1: defaulting ping-timeout to 42
[2010-03-22 11:14:15] D [transport.c:145:transport_load] transport: attempt to load file /usr/local/lib/glusterfs/nfsalpha1/transport/socket.so
[2010-03-22 11:14:15] D [xlator.c:285:_volume_option_value_validate] 192.168.1.127-1: no range check required for 'option remote-port 6996'
[2010-03-22 11:14:15] D [transport.c:145:transport_load] transport: attempt to load file /usr/local/lib/glusterfs/nfsalpha1/transport/socket.so
[2010-03-22 11:14:15] D [xlator.c:285:_volume_option_value_validate] 192.168.1.127-1: no range check required for 'option remote-port 6996'
[2010-03-22 11:14:15] D [transport.c:145:transport_load] transport: attempt to load file /usr/local/lib/glusterfs/nfsalpha1/transport/socket.so
[2010-03-22 11:14:15] D [xlator.c:285:_volume_option_value_validate] server-tcp: no range check required for 'option transport.socket.listen-port 6996'
[2010-03-22 11:14:15] T [socket.c:243:__socket_nodelay] : NODELAY enabled for socket 8
[2010-03-22 11:14:15] T [server-protocol.c:6655:init] server-tcp: defaulting limits.transaction-size to 4194304
[2010-03-22 11:14:15] T [nfs.c:515:notify] nfs: Notification received: 1
[2010-03-22 11:14:15] T [posix.c:1448:posix_janitor_thread_proc] posix1: janitor cleaning out /.landfill
[2010-03-22 11:14:15] D [client-protocol.c:7027:notify] 192.168.1.127-1: got GF_EVENT_PARENT_UP, attempting connect on transport
[2010-03-22 11:14:15] T [client-protocol.c:6319:client_protocol_reconnect] 192.168.1.127-1: attempting reconnect
[2010-03-22 11:14:15] T [common-utils.c:107:gf_resolve_ip6] resolver: DNS cache not present, freshly probing hostname: 192.168.1.127
[2010-03-22 11:14:15] T [common-utils.c:148:gf_resolve_ip6] resolver: returning ip-192.168.1.127 (port-6996) for hostname: 192.168.1.127 and port: 6996
[2010-03-22 11:14:15] D [client-protocol.c:7027:notify] 192.168.1.127-1: got GF_EVENT_PARENT_UP, attempting connect on transport
[2010-03-22 11:14:15] T [client-protocol.c:6319:client_protocol_reconnect] 192.168.1.127-1: attempting reconnect
[2010-03-22 11:14:15] T [common-utils.c:107:gf_resolve_ip6] resolver: DNS cache not present, freshly probing hostname: 192.168.1.127
[2010-03-22 11:14:15] T [common-utils.c:148:gf_resolve_ip6] resolver: returning ip-192.168.1.127 (port-6996) for hostname: 192.168.1.127 and port: 6996
[2010-03-22 11:14:15] T [socket.c:243:__socket_nodelay] : NODELAY enabled for socket 10
[2010-03-22 11:14:15] T [nfs.c:515:notify] nfs: Notification received: 1
[2010-03-22 11:14:15] D [client-protocol.c:7027:notify] 192.168.1.127-1: got GF_EVENT_PARENT_UP, attempting connect on transport
[2010-03-22 11:14:15] T [client-protocol.c:6319:client_protocol_reconnect] 192.168.1.127-1: attempting reconnect
[2010-03-22 11:14:15] T [socket.c:995:socket_connect] 192.168.1.127-1: connect () called on transport already connected
[2010-03-22 11:14:15] D [client-protocol.c:7027:notify] 192.168.1.127-1: got GF_EVENT_PARENT_UP, attempting connect on transport
[2010-03-22 11:14:15] T [client-protocol.c:6319:client_protocol_reconnect] 192.168.1.127-1: attempting reconnect
[2010-03-22 11:14:15] T [socket.c:995:socket_connect] 192.168.1.127-1: connect () called on transport already connected
[2010-03-22 11:14:15] N [glusterfsd.c:1400:main] glusterfs: Successfully started
[2010-03-22 11:14:15] T [socket.c:243:__socket_nodelay] : NODELAY enabled for socket 11
[2010-03-22 11:14:15] D [client-protocol.c:7041:notify] 192.168.1.127-1: got GF_EVENT_CHILD_UP
[2010-03-22 11:14:15] T [socket.c:995:socket_connect] 192.168.1.127-1: connect () called on transport already connected
[2010-03-22 11:14:15] D [client-protocol.c:7041:notify] 192.168.1.127-1: got GF_EVENT_CHILD_UP
[2010-03-22 11:14:15] T [socket.c:995:socket_connect] 192.168.1.127-1: connect () called on transport already connected
[2010-03-22 11:14:15] T [socket.c:243:__socket_nodelay] : NODELAY enabled for socket 12
[2010-03-22 11:14:15] D [addr.c:190:gf_auth] posix1: allowed = "*", received addr = "192.168.1.127"
[2010-03-22 11:14:15] N [server-protocol.c:5852:mop_setvolume] server-tcp: accepted client from 192.168.1.127:1023
[2010-03-22 11:14:15] T [server-protocol.c:5895:mop_setvolume] server-tcp: creating inode table with lru_limit=1024, xlator=posix1
[2010-03-22 11:14:15] D [addr.c:190:gf_auth] posix1: allowed = "*", received addr = "192.168.1.127"
[2010-03-22 11:14:15] N [server-protocol.c:5852:mop_setvolume] server-tcp: accepted client from 192.168.1.127:1022
[2010-03-22 11:14:15] W [client-protocol.c:6237:client_setvolume_cbk] 192.168.1.127-1: attaching to the local volume 'posix1'
[2010-03-22 11:14:15] N [client-protocol.c:6246:client_setvolume_cbk] 192.168.1.127-1: Connected to 192.168.1.127:6996, attached to remote volume 'posix1'.
[2010-03-22 11:14:15] T [nfs.c:515:notify] nfs: Notification received: 5
[2010-03-22 11:14:15] D [nfs.c:259:nfs_startup_subvolume] nfs: Starting up: 192.168.1.127-1
[2010-03-22 11:14:15] T [nfs-fops.c:268:nfs_fop_lookup] nfs: Lookup: /
pending frames:

patchset: v3.0.0-143-gafc1635
signal received: 11
time of crash: 2010-03-22 11:14:15
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs nfsalpha1
/lib/libc.so.6[0x7fdb4e6dd530]
/lib/libpthread.so.0(pthread_spin_lock+0x0)[0x7fdb4ea24eb0]
/usr/local/lib/libglusterfs.so.0(mem_get+0x1a)[0x7fdb4ee69f8a]
/usr/local/lib/glusterfs/nfsalpha1/xlator/nfs/server.so(nfs_fop_local_init+0x29)[0x7fdb4d61a0a9]
/usr/local/lib/glusterfs/nfsalpha1/xlator/nfs/server.so(nfs_fop_lookup+0x7c)[0x7fdb4d61d69c]
/usr/local/lib/glusterfs/nfsalpha1/xlator/nfs/server.so(nfs_startup_subvolume+0x1c0)[0x7fdb4d618bd0]
/usr/local/lib/glusterfs/nfsalpha1/xlator/nfs/server.so(notify+0x93)[0x7fdb4d619003]
/usr/local/lib/libglusterfs.so.0(xlator_notify+0x43)[0x7fdb4ee4e9b3]
/usr/local/lib/glusterfs/nfsalpha1/xlator/protocol/client.so(protocol_client_post_handshake+0x112)[0x7fdb4d85fa32]
/usr/local/lib/glusterfs/nfsalpha1/xlator/protocol/client.so(client_setvolume_cbk+0x193)[0x7fdb4d85fbe3]
/usr/local/lib/glusterfs/nfsalpha1/xlator/protocol/client.so(protocol_client_pollin+0xca)[0x7fdb4d84efda]
/usr/local/lib/glusterfs/nfsalpha1/xlator/protocol/client.so(notify+0xe8)[0x7fdb4d855848]
/usr/local/lib/libglusterfs.so.0(xlator_notify+0x43)[0x7fdb4ee4e9b3]
/usr/local/lib/glusterfs/nfsalpha1/transport/socket.so(socket_event_handler+0xc8)[0x7fdb4c97c568]
/usr/local/lib/libglusterfs.so.0[0x7fdb4ee692dd]
glusterfsd(main+0x862)[0x404502]
/lib/libc.so.6(__libc_start_main+0xfd)[0x7fdb4e6c8abd]
glusterfsd[0x402ab9]
---------

Tejas N. Bhise wrote:
> Dear Community Users,
>
> Gluster is happy to announce the ALPHA release of the native NFS Server.
> [...]
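Since the SIGSEGV in nfs_fop_local_init reproduces on startup, a core dump loaded into gdb would give the developers more than the raw addresses above. A generic sketch for capturing one; the glusterfsd binary path is an assumption based on the /usr/local install prefix used here:

```shell
# Allow core dumps in this shell, then reproduce the crash:
ulimit -c unlimited
glusterfsd -f nfs1.vol --log-level=TRACE

# Load the resulting core into gdb and take a full backtrace
# to attach to the bug report (binary path assumed):
gdb /usr/local/sbin/glusterfsd core
# (gdb) bt full
```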