similar to: wide softlink to different partition copy fails

Displaying 20 results from an estimated 100 matches similar to: "wide softlink to different partition copy fails"

2013 Dec 11
0
wide symlinks across partitions
Hi samba list, I am seeing a peculiar issue when sharing out a directory containing a soft symlink pointing to a directory outside of the share and on a different filesystem/block device. I have 2 shares: [share1] read only = No follow symlinks = yes wide links = yes /media/stor0/user [share2] /media/stor1/user /media/stor0 is an xfs filesystem, /media/stor1 is an xfs filesystem. There is a folder
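The snippet above runs the share definitions together; a minimal smb.conf sketch of what they likely look like is below. The "path =" lines are an assumption, since the snippet only lists the directories.

    # Hypothetical reconstruction of the two shares described in the post.
    # Note: on current Samba releases "wide links = yes" is ignored unless
    # "unix extensions = no" is also set in the existing [global] section.
    cat >> /etc/samba/smb.conf <<'EOF'
    [share1]
       path = /media/stor0/user
       read only = No
       follow symlinks = yes
       wide links = yes

    [share2]
       path = /media/stor1/user
    EOF
    testparm -s    # validate the resulting configuration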
2013 Dec 09
0
Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)
Hello, I'm trying to build a replica volume on two servers. The servers are blade6 and blade7 (there is another node, blade1, in the peer list, but with no volumes). The volume seems ok, but I cannot mount it from NFS. Here are some logs: [root@blade6 stor1]# df -h /dev/mapper/gluster_stor1 882G 200M 837G 1% /gluster/stor1 [root@blade7 stor1]# df -h /dev/mapper/gluster_fast
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, There is a known issue with gluster 3.12.x builds (see [1]) so you may be running into this. The "shared-brick-count" values seem fine on stor1. Please send us "grep -n "share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes so we can check if they are the cause. Regards, Nithya [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
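The requested check can be scripted across the nodes in the thread; a sketch, assuming the node names stor1data, stor2data and stor3data and passwordless ssh between them:

    # Compare the shared-brick-count values recorded in the volfiles on every
    # node; inconsistent values are the symptom of the bug referenced as [1].
    for node in stor1data stor2data stor3data; do
        echo "== $node =="
        ssh "$node" 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'
    done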
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message. Below is the output of both volumes: [root at stor1t ~]# gluster volume rebalance volumedisk1 status Node Rebalanced-files size scanned failures skipped status run time in h:m:s
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote: > Hi Nithya, > > I applied the workaround for this bug and now df shows the right size: > > That is good to hear. > [root at stor1 ~]# df -h > Filesystem Size Used Avail Use% Mounted on > /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0 > /dev/sdc1
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote: > Hi Nithya, > > My initial setup was composed of 2 similar nodes: stor1data and stor2data. > A month ago I expanded both volumes with a new node: stor3data (2 bricks > per volume). > Of course, then to add the new peer with the bricks I did the 'balance > force' operation.
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, I applied the workaround for this bug and now df shows the right size: [root at stor1 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0 /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1 stor1data:/volumedisk0 101T 3,3T 97T 4% /volumedisk0 stor1data:/volumedisk1
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi, A few days ago all my glusterfs configuration was working fine. Today I realized that the total size reported by the df command has changed and is smaller than the aggregated capacity of all the bricks in the volume. I checked that the status of all the volumes is fine, all the glusterd daemons are running, and there are no errors in the logs; however, df still reports a wrong total size. My configuration for one volume:
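A quick way to see this kind of mismatch is to compare what the bricks report with what the mounted volume advertises; a sketch, using the volume and mount names that appear elsewhere in this thread:

    # Per-brick view: total and free space as each brick filesystem sees it
    gluster volume status volumedisk1 detail

    # Aggregate view advertised to clients by the distribute (DHT) layer
    df -h /volumedisk0 /volumedisk1

    # Underlying brick filesystems on this node, for cross-checking
    df -h /mnt/glusterfs/vol0 /mnt/glusterfs/vol1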
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, My initial setup was composed of 2 similar nodes: stor1data and stor2data. A month ago I expanded both volumes with a new node: stor3data (2 bricks per volume). Of course, to add the new peer with its bricks I then did the 'balance force' operation. This task finished successfully (you can see the info below) and the number of files on the 3 nodes was very similar. For volumedisk1 I
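The expansion described here usually maps to the following sequence; a sketch in which the brick paths on stor3data are assumptions, with 'balance force' corresponding to the final rebalance step:

    # Join the new node and attach its bricks to the existing distributed volume
    gluster peer probe stor3data
    gluster volume add-brick volumedisk1 \
        stor3data:/mnt/glusterfs/vol1a stor3data:/mnt/glusterfs/vol1b

    # Spread existing data onto the new bricks and monitor progress
    gluster volume rebalance volumedisk1 start force
    gluster volume rebalance volumedisk1 status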
2008 Aug 07
0
crash : 2.7.0 after installing it (PR#12143)
Dear Murdoch, The attached file is the error message after installing 2.7.1 or its patched Windows version. After
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya, Below is the output of both volumes: [root at stor1t ~]# gluster volume rebalance volumedisk1 status Node Rebalanced-files size scanned failures skipped status run time in h:m:s
2014 Feb 19
1
Problems with Windows on KVM machine
Hello. We've started a virtualisation project and got stuck at one point. Currently we are using the following: Intel 2312WPQJR as a node, Intel R2312GL4GS as storage with an Intel Infiniband 2-port controller, and an Infiniband Mellanox SwitchX IS5023 for switching. The nodes run CentOS 6.5 with the built-in Infiniband package (Linux v0002 2.6.32-431.el6.x86_64), the storage CentOS 6.4, also built-in
2007 Mar 09
5
memory leak in index build?
I have a script (below) which attempts to make an index out of all the man pages on my system. It takes a while, mostly because it runs man over and over... but anyway, as time goes on the memory usage goes up and up and never down. Eventually, it runs out of ram and just starts thrashing up the swap space, pretty much grinding to a halt. The workaround would seem to be to index documents in
2017 Jun 09
2
Gluster deamon fails to start
This is in relation to bringing up a new oVirt environment. When I was bringing up oVirt (Hosted-Engine), I had the Glusterd service stopped, since the server (I have its name as GSAoV07) was initially only going to be used to host the oVirt engine as a hypervisor (it will have Gluster volumes in the near future). I have three other servers I am using for storage. One of those three is also going
2017 Jun 12
0
Gluster deamon fails to start
glusterd failed to resolve the addresses of the bricks. This could happen either because glusterd was trying to resolve the address of the brick before the N/W interface was up, or because of a change in the IP. If it's the latter, a point to note is that it's recommended to form the cluster using hostnames; otherwise, if the server goes through an IP change, we have to change the configuration
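Both possibilities can be checked directly; a sketch, not specific to the reporter's setup (the FQDN is hypothetical), that verifies brick-name resolution and, on systemd-based distributions, delays glusterd until the network is online:

    # Do the brick hostnames recorded in the volumes still resolve here?
    gluster volume info | grep -i brick
    getent hosts gsaov07.example.local    # hypothetical brick FQDN

    # If glusterd races the network at boot, order it after network-online:
    systemctl edit glusterd
    #   [Unit]
    #   Wants=network-online.target
    #   After=network-online.target
    systemctl restart glusterd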
2018 May 18
0
glusterfs 3.6.5 and selfheal
Hi, I am running a glusterfs server with a replicated volume for qemu-kvm (proxmox) VM storage, which is mounted using the libgfapi module. The servers are running a network with MTU 9000 and the client is not (yet). The question I've got is this: is it normal to see this kind of output: gluster volume heal HA-100G-POC-PVE info Brick stor1:/exports/HA-100G-POC-PVE/100G/ /images/100/vm-100-disk-1.raw -
2017 Jun 12
0
Gluster deamon fails to start
On Mon, 12 Jun 2017 at 17:40, Langley, Robert <Robert.Langley at ventura.org> wrote: > Thank you for your response. There has been no change of IP addresses. And > I have tried restarting the glusterd service multiple times. > I am using fully qualified names with a DNS server that has forward and > reverse setup. > Something I had noticed is that, with oVirt, communication
2017 Jun 12
2
Gluster deamon fails to start
Thank you for your response. There has been no change of IP addresses, and I have tried restarting the glusterd service multiple times. I am using fully qualified names with a DNS server that has forward and reverse lookups set up. Something I had noticed is that, with oVirt, communication is being attempted over the VM network rather than the storage network, which is not how the bricks are defined. Not sure
2017 Jun 12
3
Gluster deamon fails to start
On Mon, Jun 12, 2017 at 6:41 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > > On Mon, 12 Jun 2017 at 17:40, Langley, Robert <Robert.Langley at ventura.org> > wrote: > >> Thank you for your response. There has been no change of IP addresses. >> And I have tried restarting the glusterd service multiple times. >> I am using fully qualified names with a
2004 Aug 26
0
[Bug 1670] New: softlink can't be excluded
https://bugzilla.samba.org/show_bug.cgi?id=1670 Summary: softlink can't be excluded Product: rsync Version: 2.6.3 Platform: All OS/Version: Linux Status: NEW Severity: normal Priority: P3 Component: core AssignedTo: wayned@samba.org ReportedBy: johndhendrickson22124@yahoo.com
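For context on the behaviour being reported: excluding a symlink by name ordinarily excludes the link itself rather than the directory it points to; a sketch with hypothetical paths, not taken from the bug report:

    # src/ contains a symlink named "link"; the anchored pattern excludes the
    # link itself (with -a, symlinks that are copied stay symlinks, since -a
    # implies -l and does not follow them).
    rsync -av --exclude='/link' src/ dst/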