similar to: windows live mail + dovecot and nfs

Displaying 20 results from an estimated 700 matches similar to: "windows live mail + dovecot and nfs"

2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
> I applied the workaround for this bug and now df shows the right size:
That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem              Size  Used Avail Use% Mounted on
> /dev/sdb1                26T  1,1T   25T   4% /mnt/glusterfs/vol0
> /dev/sdc1
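A quick way to double-check the fix discussed in this thread is to compare what the brick file systems report with what the FUSE mount reports. A minimal sketch, assuming the mount points shown above; run the first command on every brick node and add up the sizes:

# per-node brick file systems (sum these across stor1data, stor2data, stor3data)
df -h /mnt/glusterfs/vol0 /mnt/glusterfs/vol1
# aggregated capacity seen through the gluster mount
df -h /volumedisk0 /volumedisk1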
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks per volume).
> Then, to add the new peer with its bricks, I ran the 'balance force' operation.
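The 'balance force' operation mentioned here maps onto gluster's rebalance command. A sketch of the expansion sequence, with the stor3data brick paths as placeholders (the real paths are not shown in this snippet):

gluster peer probe stor3data
gluster volume add-brick volumedisk0 stor3data:/path/to/brick1/vol0 stor3data:/path/to/brick2/vol0
# push existing files onto the new bricks and watch progress
gluster volume rebalance volumedisk0 start force
gluster volume rebalance volumedisk0 status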
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message. Below is the output for both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
---------   -----------   -----------   -----------   -----------
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, I applied the workaround for this bug and now df shows the right size:
[root at stor1 ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sdb1                26T  1,1T   25T   4% /mnt/glusterfs/vol0
/dev/sdc1                50T   16T   34T  33% /mnt/glusterfs/vol1
stor1data:/volumedisk0  101T  3,3T   97T   4% /volumedisk0
stor1data:/volumedisk1
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, My initial setup was composed of 2 similar nodes: stor1data and stor2data. A month ago I expanded both volumes with a new node: stor3data (2 bricks per volume). Then, to add the new peer with its bricks, I ran the 'balance force' operation. This task finished successfully (you can see the info below) and the number of files on the 3 nodes was very similar. For volumedisk1 I
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, Below is the output for both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
---------   -----------   -----------   -----------   -----------   -----------   ------------
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, There is a known issue with gluster 3.12.x builds (see [1]) so you may be running into this. The "shared-brick-count" values seem fine on stor1. Please send us "grep -n "share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes so we can check if they are the cause. Regards, Nithya [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
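To gather the output Nithya asks for from every node in one go, a small loop like the following works (a sketch; the hostnames come from this thread and passwordless ssh between the nodes is assumed):

for node in stor1data stor2data stor3data; do
  echo "== $node =="
  ssh "$node" 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'
done
# per bug 1517260, an unexpected shared-brick-count value in these volfiles
# is what makes df divide the reported capacity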
2017 Jun 15
1
How to expand Replicated Volume
Hi Nag Pavan Chilakam, Can I use this command "gluster vol add-brick vol1 replica 2 file01g:/brick3/data/vol1 file02g:/brick4/data/vol1" on both existing file servers 01 and 02, without adding new servers? Is it OK for expanding the volume? Thanks for your support. Regards, Giang 2017-06-14 22:26 GMT+07:00 Nag Pavan Chilakam <nag.chilakam at gmail.com>: > Hi, > You can use add-brick
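For reference, the quoted command adds one more replica-2 brick pair built from the two existing servers, which is a valid way to grow capacity without new machines; a rebalance afterwards spreads existing data onto the new bricks. A sketch using the paths from the question:

gluster volume add-brick vol1 replica 2 file01g:/brick3/data/vol1 file02g:/brick4/data/vol1
gluster volume rebalance vol1 start
gluster volume rebalance vol1 status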
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi, A few days ago my glusterfs configuration was working fine. Today I realized that the total size reported by the df command has changed and is smaller than the aggregated capacity of all the bricks in the volume. I checked that all the volumes are in a healthy state, all the glusterd daemons are running and there are no errors in the logs, yet df shows a wrong total size. My configuration for one volume:
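Before digging further, the usual sanity checks are the volume definition, brick status and the capacity of the underlying brick file systems; a short sketch (the volume and mount names are taken from later messages in this thread):

gluster volume info volumedisk0
gluster volume status volumedisk0 detail
df -h /mnt/glusterfs/vol0    # what the brick file system itself reports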
2013 Jan 03
0
Resolve brick failed in restore
Hi, I have a lab with 10 machines acting as storage servers for some compute machines, using glusterfs to distribute the data as two volumes. Created using:
gluster volume create vol1 192.168.10.{221..230}:/data/vol1
gluster volume create vol2 replica 2 192.168.10.{221..230}:/data/vol2
and mounted on the client and server machines using:
mount -t glusterfs 192.168.10.221:/vol1 /mnt/vol1
mount
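A frequently reported cause of a brick refusing to start after a restore is a missing trusted.glusterfs.volume-id extended attribute on the brick directory. The following is only a sketch of that particular fix, applicable only if the brick log actually complains about the volume id; the value must be copied from a healthy brick of the same volume:

getfattr -n trusted.glusterfs.volume-id -e hex /data/vol1   # on a healthy node
# on the broken node, re-apply the value read above (placeholder shown)
setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-healthy-brick> /data/vol1
/etc/init.d/glusterd restart   # respawn the brick process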
2005 Apr 08
0
windows copy versus move
Running 3.0.4 on FreeBSD 5.2.1... I have two directories...
drwxrwxr-x  root  data_current  /usr/vol1/current
drwxrwx---  root  data_current  /usr/vol1/hold
If a file is in /usr/vol1/hold with the following attributes...
-rwxrwx---  root  data_hold  file1
...and a user MOVES it to /usr/vol1/current it has the following attributes...
-rwxrwx---  root  data_hold  file1
...if the user COPIES
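The behaviour itself comes from Windows semantics: a move inside the same share is a rename, so the file keeps its original group and mode, while a copy creates a new file that picks up the defaults of the target directory and share. If files landing in /usr/vol1/current should end up in group data_current, the usual building blocks are a setgid directory plus, for files created through Samba, inherit permissions; a sketch with assumed values (note it does not retroactively change files that were merely renamed):

chgrp data_current /usr/vol1/current
chmod g+s /usr/vol1/current          # new files inherit the directory's group

# smb.conf share section (section name assumed)
[current]
   path = /usr/vol1/current
   inherit permissions = yes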
2017 Dec 18
2
Upgrading from Gluster 3.8 to 3.12
Hi, I have a cluster of 10 servers all running Fedora 24 along with Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with Gluster 3.12. I saw the documentation and did some testing but I would like to run my plan through some (more?) educated minds. The current setup is:
Volume Name: vol0
Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1:
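Not a replacement for the upgrade guide, but on a replicated volume a rolling upgrade usually follows the same shape on each server in turn: stop the gluster processes, upgrade, restart, and wait for self-heal to drain before touching the next node. A sketch, assuming Fedora's dnf and the vol0 volume named above:

systemctl stop glusterd
killall glusterfs glusterfsd        # stop any remaining brick/client processes
dnf upgrade 'glusterfs*'            # after the move to Fedora 27 / Gluster 3.12
systemctl start glusterd
gluster volume heal vol0 info       # proceed only once no entries are pending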
2017 Dec 19
0
Upgrading from Gluster 3.8 to 3.12
On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com> wrote: > Hi, > > I have a cluster of 10 servers all running Fedora 24 along with > Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with > Gluster 3.12. I saw the documentation and did some testing but I > would like to run my plan through some (more?) educated minds. >
2003 Jun 04
1
rsync not overwriting files on destination
Hi, I am rsyncing from my source server A to a destination server B. A/vol1 contains two files syslog.txt and syslog.bak B/vol1 contains five files syslog.txt, syslog.bak, initlog.txt, internal.txt, and internal.bak. I want to preserve the 5 files on B/vol1 when I do rsync from A to B. Here is the command I use: rsync -av --delete --exclude-from=EXCLUDEFILE A/ B I've tried the option
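With --delete, rsync removes anything on the receiver that is absent from the sender unless it matches an exclude pattern; excluded files are left alone as long as --delete-excluded is not added. A sketch using the file names from the question (the exact source and destination paths are assumptions):

# EXCLUDEFILE, one pattern per line:
#   initlog.txt
#   internal.txt
#   internal.bak
rsync -av --delete --exclude-from=EXCLUDEFILE A/vol1/ B/vol1/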
2006 Oct 31
3
zfs: zvols minor #'s changing and causing probs w/ volumes
Team, **Please respond to me and my coworker listed in the Cc, since neither of us is on this alias** QUICK PROBLEM DESCRIPTION: The customer created a dataset which contains all the zvols for a particular zone. The zone is then given access to all the zvols in the dataset using a match statement in the zone config (see the long problem description for details). After the initial boot of the zone
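For readers unfamiliar with the setup being described: the 'match statement' is a device resource in the zone configuration that hands a pattern of zvol device nodes to the zone. A minimal sketch, with the pool, dataset and zone names as placeholders:

zonecfg -z myzone 'add device; set match=/dev/zvol/rdsk/tank/zonevols/*; end'
zonecfg -z myzone info device    # confirm the match rule was recorded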
2007 Feb 24
1
zfs received vol not appearing on iscsi target list
Just installed Nexenta and I've been playing around with zfs.
root at hzsilo:/tank# uname -a
SunOS hzsilo 5.11 NexentaOS_20070105 i86pc i386 i86pc Solaris
root at hzsilo:/tank# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
home             89.5K   219G    32K  /export/home
tank              330K  1.78T  51.9K  /tank
tank/iscsi_luns   147K
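On builds of that vintage a zvol only shows up as an iSCSI target when its shareiscsi property is on, and a volume created by zfs receive will not necessarily have it set. A hedged sketch (the child volume name under tank/iscsi_luns is a placeholder):

zfs get shareiscsi tank/iscsi_luns/vol1
zfs set shareiscsi=on tank/iscsi_luns/vol1
iscsitadm list target            # the received volume should now be listed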
1999 Dec 13
1
file permissions: samba vs novell!
Hi there! I want to migrate a Novell NetWare server to a Samba server, and I have a big problem with the file permissions! We use a big database system. The clients look for the programs and for the data on the same drive, so I must have one share with different rights (rwx ... for data and rx for programs). The directory structure is the following: vol1\ \flex31d
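Most NetWare-style rights setups like this end up expressed as plain Unix group permissions under a single Samba share: data directories group-writable, program directories read/execute only. A rough sketch with placeholder group and directory names (the real layout below vol1\flex31d is not shown in the message):

chgrp -R dbgroup /vol1/flex31d
chmod -R 770 /vol1/flex31d/data    # data: read/write/execute for the group
chmod -R 750 /vol1/flex31d/prog    # programs: read and execute only
chmod g+s /vol1/flex31d/data       # new files keep the directory's group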
2010 Dec 24
1
node crashing on 4 replicated-distributed cluster
Hi, I've run into trouble after a few minutes of glusterfs operation. I set up a 4-node replica-4 volume, with 2 bricks on every server:
# gluster volume create vms replica 4 transport tcp 192.168.7.1:/srv/vol1 192.168.7.2:/srv/vol1 192.168.7.3:/srv/vol1 192.168.7.4:/srv/vol1 192.168.7.1:/srv/vol2 192.168.7.2:/srv/vol2 192.168.7.3:/srv/vol2 192.168.7.4:/srv/vol2
I started copying files with
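When a node starts failing shortly after I/O begins, the first things worth capturing are the volume layout as gluster sees it and the brick and client logs on the crashing machine; a short sketch using the volume name from the message (log file names follow the usual brick-path convention and may differ by release):

gluster volume info vms
gluster peer status
tail -n 100 /var/log/glusterfs/bricks/srv-vol1.log   # brick log for /srv/vol1
# the client (mount) log sits alongside it in /var/log/glusterfs/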
2017 Jul 11
2
Public file share Samba 4.6.5
I am trying to configure a public file share on \\fs1\vol1. From a Windows 7 command prompt, I enter: dir \\fs1\vol1 Windows says: Logon failure: unknown user name or bad password. Where am I going wrong? The error log says: "SPNEGO login failed: NT_STATUS_NO_SUCH_USER" - that must have something to do with this, but I thought that was the point of "map to guest = Bad User"
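Guest access needs three things to line up: the global mapping, guest ok on the share itself, and a guest account that actually exists on the system. A minimal smb.conf sketch (the share name comes from the question; the path and the remaining values are assumptions):

[global]
   map to guest = Bad User
   guest account = nobody

[vol1]
   path = /srv/vol1
   guest ok = yes
   read only = yes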
2007 Jul 13
1
do we support zonepath on a UFS-formatted ZFS volume
Hi, ZFS experts, From the ZFS release notes: "Solaris 10 6/06 and Solaris 10 11/06: Do Not Place the Root File System of a Non-Global Zone on ZFS. The zonepath of a non-global zone should not reside on ZFS for this release. This action might result in patching problems and possibly prevent the system from being upgraded to a later Solaris 10 update release." So my question is, do we
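To make the question concrete: the layout being asked about puts a UFS file system on top of a ZFS volume and uses that as the zonepath, instead of a ZFS file system directly. A sketch of that layout, purely to illustrate what is meant (pool, volume and zone names are placeholders; this is not a statement that the configuration is supported):

zfs create -V 10g tank/zonevols/myzone
newfs /dev/zvol/rdsk/tank/zonevols/myzone
mkdir -p /zones/myzone
mount /dev/zvol/dsk/tank/zonevols/myzone /zones/myzone
zonecfg -z myzone 'create; set zonepath=/zones/myzone'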