search for: posix1e

Displaying 11 results from an estimated 11 matches for "posix1e".

2010 Mar 04
1
[3.0.2] booster + unfsd failed
Hi list. I have been testing with glusterfs-3.0.2. The glusterfs mount works well, and unfsd on the glusterfs mount point works well too. When using booster, however, the unfsd realpath check fails, although the ls utility works fine. I also tried a build from the 3.0.0 git head source, but the result was the same. My system is Ubuntu 9.10, and I am using the unfsd source from the official Gluster download site. Any comments appreciated!! - kpkim root at
2020 Sep 16
1
Internal error on Samba 4.10.17
...upgrade to samba411-4.11.11. Just trying and monitoring. >> Thank you. > > > If /mnt/DAT resizes on ZFS, then "vfs objects = zfsacl" must be set. > Hmm... looks like an autocorrect fail. If /mnt/DAT is a zpool, then you _must_ set "zfsacl". Attempts to set a POSIX1e ACL will fail with EINVAL. Typically in FreeBSD we use pathconf(2) to discover the ACL branding for the underlying filesystem, and then use branding-aware syscalls. These are the sorts of OS-specific nuances that we try to address through VFS modules. In this case, vfs_zfsacl will do mostly the right t...
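To make the pathconf(2) point above concrete, here is a minimal C sketch (not from this thread; the file name and the /mnt/DAT default are illustrative) that probes the ACL branding of a path on FreeBSD, where _PC_ACL_NFS4 and _PC_ACL_EXTENDED report NFSv4 and POSIX.1e ACL support respectively:

    /* acl_branding.c - hedged sketch: probe ACL branding with pathconf(2) on FreeBSD */
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        /* default path is illustrative only */
        const char *path = argc > 1 ? argv[1] : "/mnt/DAT";

        /* 1 if the filesystem supports NFSv4 ACLs (e.g. a ZFS dataset) */
        long nfs4 = pathconf(path, _PC_ACL_NFS4);

        /* 1 if the filesystem supports POSIX.1e ACLs (e.g. UFS with ACLs enabled) */
        long posix1e = pathconf(path, _PC_ACL_EXTENDED);

        printf("%s: NFSv4 ACLs=%ld, POSIX.1e ACLs=%ld\n", path, nfs4, posix1e);
        return 0;
    }

On a zpool this should report NFSv4 branding, which matches the observation above that setting a POSIX1e ACL there fails with EINVAL.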
2009 Jun 26
0
Error when expand dht model volumes
Hi all: I ran into a problem when expanding dht volumes. I wrote into a dht storage directory until it grew to 90% full, so I added four new volumes to the configuration file. But after starting again, some of the data in the directory had disappeared. Why??? Is there a special action needed before expanding the volumes? My client configuration file is this: volume client1 type protocol/client option transport-type
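For context, a client volfile of the kind being quoted usually pairs one protocol/client volume per brick with a cluster/distribute (dht) volume on top; the sketch below is illustrative only (host and volume names are assumptions, not the poster's actual configuration), and expanding the volume means adding more protocol/client definitions and appending them to the subvolumes line:

    # hedged sketch of a dht client volfile; names are assumptions
    volume client1
      type protocol/client
      option transport-type tcp
      option remote-host server1
      option remote-subvolume brick1
    end-volume

    volume client2
      type protocol/client
      option transport-type tcp
      option remote-host server2
      option remote-subvolume brick1
    end-volume

    volume dht0
      type cluster/distribute
      subvolumes client1 client2
    end-volume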
2011 May 14
0
Data is Copying when a new brick is added.
== Data Copying when a new brick is added. == Hi. I'm a first-time glusterfs user, and I'm trying to simulate what will happen when I need to add more bricks for storage capacity. My config files are below, but I'll try to explain what is going on. I have 2 machines with 2 hard drives in each. I created a replicated storage system where machine A replicates to machine B.
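The replication piece of a setup like this is normally a single cluster/replicate volume layered over one protocol/client volume per machine, along the same lines as the dht sketch earlier in these results; the fragment below is a hedged illustration with invented volume names, not the poster's actual config files:

    # hedged sketch: mirror machine A's brick to machine B's brick
    volume mirror0
      type cluster/replicate
      subvolumes client-machine-a client-machine-b
    end-volume

Adding bricks for capacity then typically means adding another such mirrored pair and distributing across the pairs.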
2011 Jan 13
0
distribute-replicate setup GFS Client crashed
Hi there, I'm running glusterfs version 3.1.0. The client crashed after some time with the stack below. [2011-01-13 08:33:49.230976] I [afr-common.c:2568:afr_notify] replicate-1: Subvolume 'distribute-1' came back up; going online. [2011-01-13 08:33:49.499909] I [afr-open.c:393:afr_openfd_sh] replicate-1: data self-heal triggered. path: /streaming/set3/work/reduce.12.1294902171.dplog.temp,
2008 Dec 18
3
Feedback and Questions on afr+unify
Hi, I just installed and configured a couple of machines with glusterfs (1.4.0-rc3). It seems to work great. Thanks for the amazing software! I've been looking for something like this for years. I have some feedback and questions. My configuration is a bit complicated: I have two machines, each with two disks, each of which has two partitions that I wanted to use (i.e. 8
2000 Oct 27
0
Segfault in 2.2.0p1 due to connect() changes in Linux 2.4
Hello, I upgraded (?) one of my machines to Linux kernel 2.4.0-test9, and sshd started failing. Specifically, the sshd child processes would segfault if a user requested X11 forwarding. I tracked the problem down to these bits of code: channels.c, x11_create_display_inet, line 1738: sock = socket(ai->ai_family, SOCK_STREAM, 0); if (sock < 0) { if (errno != EINVAL) {
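For readers following the code path, here is a hedged, self-contained sketch of the surrounding pattern (function and variable names are illustrative, not the actual OpenSSH 2.2.0p1 code): the X11 listener setup walks the getaddrinfo() result list, and the errno != EINVAL test decides whether a socket() failure is fatal or just means that address family is unsupported and the next entry should be tried:

    /* hedged sketch (not the OpenSSH source): open a socket for the first
       address family the kernel supports, skipping unsupported families */
    #include <errno.h>
    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int open_x11_listener(const char *port)
    {
        struct addrinfo hints, *res, *ai;
        int sock = -1;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;      /* try both IPv4 and IPv6 */
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_flags = AI_PASSIVE;

        if (getaddrinfo(NULL, port, &hints, &res) != 0)
            return -1;

        for (ai = res; ai != NULL; ai = ai->ai_next) {
            sock = socket(ai->ai_family, SOCK_STREAM, 0);
            if (sock < 0) {
                if (errno != EINVAL)
                    break;                /* unexpected failure: give up */
                continue;                 /* family not supported: try the next entry */
            }
            break;                        /* got a usable socket */
        }
        freeaddrinfo(res);
        return sock;
    }

    int main(void)
    {
        int s = open_x11_listener("6010");   /* port for X11 display 10, illustrative */
        if (s >= 0)
            close(s);
        return s >= 0 ? 0 : 1;
    }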
2020 Sep 16
3
Internal error on Samba 4.10.17
On 9/16/20 2:27 AM, Andrew Walker wrote: > > > On Tue, Sep 15, 2020 at 2:58 PM Budi Janto via samba <samba at lists.samba.org> wrote: > > Hi, > > Over 3 days of uptime it serves about 40 Windows client workstations with traffic averaging 50 Mbps - 80 Mbps (video streaming), running on a FreeBSD system with 16
2010 May 04
1
Posix warning : Access to ... is crossing device
I have a distributed/replicated setup with GlusterFS 3.0.2 that I'm testing on 4 servers, each with access to /mnt/gluster (which consists of all the directories /mnt/data01 - data24) on each server. I'm using configs I built with volgen, but every time I access a file (via an 'ls -l') for the first time, I get all of these messages in my logs on each server: [2010-05-04 10:50:30] W
2010 Apr 30
1
gluster-volgen - syntax for mirroring/distributing across 6 nodes
NOTE: posted this to gluster-devel when I meant to post it to gluster-users

01 | 02 mirrored --|
03 | 04 mirrored --| distributed
05 | 06 mirrored --|

1) Would this command work for that?

glusterfs-volgen --name repstore1 --raid 1 clustr-01:/mnt/data01 clustr-02:/mnt/data01 --raid 1 clustr-03:/mnt/data01 clustr-04:/mnt/data01 --raid 1 clustr-05:/mnt/data01 clustr-06:/mnt/data01

So the
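For what it's worth, the glusterfs-volgen form I have usually seen for this mirrored-pairs-then-distribute layout passes --raid 1 once and lists the bricks in mirror-pair order (adjacent bricks are mirrored, and the pairs are distributed); treat the exact flag placement below as an assumption to verify against glusterfs-volgen --help rather than a confirmed answer to the question above:

    glusterfs-volgen --name repstore1 --raid 1 \
        clustr-01:/mnt/data01 clustr-02:/mnt/data01 \
        clustr-03:/mnt/data01 clustr-04:/mnt/data01 \
        clustr-05:/mnt/data01 clustr-06:/mnt/data01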
2010 Mar 15
1
Glusterfs 3.0.X crashed on Fedora 11
GlusterFS 3.0.X crashed on Fedora 12; it got a buffer overflow. It seems fine on Fedora 11.

Name: fuse   Arch: x86_64   Version: 2.8.1    Release: 4.fc12
Name: glibc  Arch: x86_64   Version: 2.11.1   Release: 1

complete log:
======================================================================================================
[root at test_machine06 ~]# glusterfsd