Displaying 20 results from an estimated 20 matches for "stor1".
2013 Dec 09
0
Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)
Hello,
I'm trying to build a replica volume on two servers.
The servers are blade6 and blade7 (there is another blade1 in the peer list, but with
no volumes).
The volume seems OK, but I cannot mount it over NFS.
Here are some logs:
[root@blade6 stor1]# df -h
/dev/mapper/gluster_stor1 882G 200M 837G 1% /gluster/stor1
[root@blade7 stor1]# df -h
/dev/mapper/gluster_fast 846G 158G 646G 20% /gluster/stor_fast
/dev/mapper/gluster_stor1 882G 72M 837G 1% /gluster/stor1
[root@blade6 stor1]# pwd
/gluster/stor1
[root@blade6 st...
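The snippet ends before the volume definition, so as an editorial aside: a minimal sketch of the checks usually run for an "Unable to self-heal contents of '/' (possible split-brain)" message, using a placeholder volume name VOLNAME (the real name is truncated above):
# Heal queue and explicit split-brain listing for the replica volume
gluster volume heal VOLNAME info
gluster volume heal VOLNAME info split-brain
# AFR changelog xattrs on each brick root (run on blade6 and blade7);
# non-zero trusted.afr.* values pointing at each other indicate a split-brain on '/'
getfattr -d -m . -e hex /gluster/stor1
# Gluster's built-in NFS server speaks NFSv3 only, so force vers=3 when mounting
mount -t nfs -o vers=3,nolock blade6:/VOLNAME /mnt/test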
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...mmand was changed and is
smaller than the aggregated capacity of all the bricks in the volume.
I checked that the status of all the volumes is fine, all the glusterd daemons
are running and there are no errors in the logs; however, df shows a wrong total size.
My configuration for one volume: volumedisk1
[root@stor1 ~]# gluster volume status volumedisk1 detail
Status of volume: volumedisk1
------------------------------------------------------------------------------
Brick : Brick stor1data:/mnt/glusterfs/vol1/brick1
TCP Port : 49153
RDMA Port : 0
Online :...
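Not part of the original report, but a short sketch of how the aggregated capacity is usually cross-checked, assuming the brick mount points and client mount shown in this thread:
# On each storage node: size of the filesystem backing the brick
df -h /mnt/glusterfs/vol1
# Per-brick view as seen by glusterd (size, free space, inode counts)
gluster volume status volumedisk1 detail
# On a client, a distributed volume should report roughly the sum of its brick
# filesystems; a much smaller number is the symptom discussed in this thread
df -h /volumedisk1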
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
On 28 February 2018 at 03:03, Jose V. Carrión <jocarbur at g...
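As an editorial aside, a sketch of the requested check; the exact volfile names below are illustrative, but glusterd 3.12 does write an "option shared-brick-count" line into each brick volfile:
# Run on every node in the cluster
grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
# Illustrative output:
#   volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
# A value greater than 1 for bricks that live on separate filesystems is the
# symptom of the bug in [1]; df then divides the reported capacity by that count.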
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root@stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1...
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below the output of both volumes:
[root@stor1t ~]# gluster volume rebalance volumedisk1 status
Node       Rebalanced-files       size       scanned       failures       skipped       status       run time in h:m:s
---------       -----------       -----------       -----------       -----------...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
After adding the new peer with its bricks, I ran the 'rebalance
force' operation. It finished successfully (you can see the info below)
and the number of files on the 3 nodes were...
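For context (not part of the message), the expansion described above normally maps to the commands below; the stor3data brick path is assumed to mirror the stor1data layout shown earlier in the thread:
# Add the new node's brick(s) to the distributed volume
gluster volume add-brick volumedisk1 stor3data:/mnt/glusterfs/vol1/brick1
# Spread existing files onto the new bricks and watch progress
gluster volume rebalance volumedisk1 start force
gluster volume rebalance volumedisk1 status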
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
> That is good to hear.
> [root@stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
> 101T 3,3T 97T 4% /volumedisk0
&...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below the output of both volumes:
[root@stor1t ~]# gluster volume rebalance volumedisk1 status
Node       Rebalanced-files       size       scanned       failures       skipped       status       run time in h:m:s
---------       -----------       -----------       -----------       -----------...
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> After adding the new peer with its bricks, I ran the 'rebalance
> force' operation. It finished successfully (you can see the info below)
> and number of f...
2013 Dec 11
0
wide symlinks across partitions
...i samba list,
I am seeing a peculiar issue when sharing out a directory containing a soft
symlink that points to a directory outside of the share and on a different
filesystem/block device.
I have 2 shares:
[share1]
read only = No
follow symlinks = yes
wide links = yes
/media/stor0/user
[share2]
/media/stor1/user
/media/stor0 is an xfs filesystem
/media/stor1 is an xfs filesystem
there is a folder at /media/stor0/user/cross
there is a softlink at /media/stor1/user/cross -> /media/stor0/user/cross
When a client mounts the partition (Linux and Mac OSX Clients tested) and
attempts to copy a file fro...
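Not from the original post, but a minimal smb.conf sketch of the combination usually needed for wide symlinks that leave the share; share and path names follow the post, and disabling 'unix extensions' is the simpler alternative to 'allow insecure wide links = yes':
[global]
   # SMB1 unix extensions and wide links do not mix unless explicitly allowed
   unix extensions = no
[share1]
   path = /media/stor0/user
   read only = no
   follow symlinks = yes
   wide links = yes
[share2]
   path = /media/stor1/user
   follow symlinks = yes
   wide links = yes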
2013 Dec 12
0
wide softlink to different partition copy fails
..., to the shared directory.
my smb.conf contains the following:
[global]
unix extensions = no
[ol01_edit]
path = /media/stor0/user
follow symlinks = yes
wide links = yes
[ol01_ingest]
path = /media/stor1/user
follow symlinks = yes
wide links = yes
The files in question are as follows:
on the server:
/media/stor0 is an XFS filesystem
/media/stor1 is an XFS filesystem
/media/stor0/user/edit_ready is a folder with 775 permissions
/media/stor1/user/edit_ready -> /media/stor0/user/edit_ready...
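A hedged way to verify what Samba actually loaded and what the link looks like on disk (not in the original post; paths as listed above):
# Effective settings after parsing smb.conf
testparm -s 2>/dev/null | grep -Ei 'unix extensions|wide links|follow symlinks'
# The symlink, its target and the permissions on both sides
ls -ld /media/stor1/user/edit_ready /media/stor0/user/edit_ready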
2017 Jun 09
2
Gluster daemon fails to start
...was initially only going to be used to host the oVirt engine as a hypervisor (it will have Gluster volumes in the near future). I have three other servers I am using for storage. One of those three is also going to be a hypervisor and the other two are dedicated storage servers (Their names are GSA-Stor1 & GSA-Stor2).
When I first deployed the engine, the server I'm having an issue with (GSAoV07) was in green status.
I then added the other server (I have it as GSAoV08), which is acting as both a Gluster server and a hypervisor. It had the red, downward arrow for a while after I added it...
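Not part of the thread, but a minimal sketch of the first checks when glusterd refuses to start on an EL7/oVirt node (unit, CLI and log file names are the stock ones; adjust if the packaging differs):
# Why did the unit fail?
systemctl status glusterd
journalctl -u glusterd --no-pager -n 50
# glusterd keeps its own log; startup failures (port clashes, stale peer state,
# unresolvable brick hostnames) usually show up here
tail -n 100 /var/log/glusterfs/glusterd.log
# Peer and volume state once the daemon is back up
gluster peer status
gluster volume status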
2017 Jun 12
0
Gluster daemon fails to start
...gt; going to be used to host the oVirt engine as a hypervisor (it will have
> Gluster volumes in the near future). I have three other servers I am using
> for storage. One of those three is also going to be a hypervisor and the
> other two are dedicated storage servers (Their names are GSA-Stor1 &
> GSA-Stor2).
> When I first deployed the engine, this server (GSAoV07) I'm having an
> issue with was in green status.
> I then added in the other server (I have it as GSAoV08), who is acting as
> both a Gluster server and a hypervisor. It had the red, downward arrow, for
>...
2014 Feb 19
1
Problems with Windows on KVM machine
...Intel 2312WPQJR as a node
Intel R2312GL4GS as the storage server, with a 2-port Intel InfiniBand controller
Mellanox SwitchX IS5023 InfiniBand switch for the interconnect.
The nodes run CentOS 6.5 with the built-in InfiniBand packages (Linux v0002
2.6.32-431.el6.x86_64); the storage runs CentOS 6.4, also with the built-in drivers
(Linux stor1.colocat.ru 2.6.32-279.el6.x86_64).
An array is created on the storage, which shows up in the system as /storage/s01.
It is then exported via NFS. The nodes mount it with:
/bin/mount -t nfs -o
rdma,port=20049,rw,hard,timeo=600,retrans=5,async,nfsvers=3,intr
192.168.1.1:/storage/s01 /home/storage/sata/0...
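Not in the original message, but a short sketch of how an NFS-over-RDMA mount like this is usually verified on an EL6 client (module and tool names are assumptions based on the stock kernel):
# The RDMA-capable NFS client transport must be loaded before mounting
modprobe xprtrdma
# Confirm that the mount really negotiated NFSv3 over proto=rdma on port 20049
nfsstat -m
grep /home/storage/sata /proc/mounts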
2017 Jun 12
2
Gluster daemon fails to start
...was initially only going to be used to host the oVirt engine as a hypervisor (it will have Gluster volumes in the near future). I have three other servers I am using for storage. One of those three is also going to be a hypervisor and the other two are dedicated storage servers (Their names are GSA-Stor1 & GSA-Stor2).
When I first deployed the engine, this server (GSAoV07) I'm having an issue with was in green status.
I then added in the other server (I have it as GSAoV08), who is acting as both a Gluster server and a hypervisor. It had the red, downward arrow, for a while after I added it late...
2017 Jun 12
3
Gluster daemon fails to start
...host the oVirt engine as a hypervisor (it will have
>>> Gluster volumes in the near future). I have three other servers I am using
>>> for storage. One of those three is also going to be a hypervisor and the
>>> other two are dedicated storage servers (Their names are GSA-Stor1 &
>>> GSA-Stor2).
>>> When I first deployed the engine, this server (GSAoV07) I'm having an
>>> issue with was in green status.
>>> I then added in the other server (I have it as GSAoV08), who is acting
>>> as both a Gluster server and a hypervisor....
2017 Jun 12
0
Gluster daemon fails to start
...be used to host the oVirt engine as a hypervisor (it will have
>> Gluster volumes in the near future). I have three other servers I am using
>> for storage. One of those three is also going to be a hypervisor and the
>> other two are dedicated storage servers (Their names are GSA-Stor1 &
>> GSA-Stor2).
>> When I first deployed the engine, this server (GSAoV07) I'm having an
>> issue with was in green status.
>> I then added in the other server (I have it as GSAoV08), who is acting as
>> both a Gluster server and a hypervisor. It had the red, down...
2017 Jun 12
0
Gluster daemon fails to start
...pervisor
>>>> (it will have Gluster volumes in the near future). I have three other
>>>> servers I am using for storage. One of those three is also going to be a
>>>> hypervisor and the other two are dedicated storage servers (Their names are
>>>> GSA-Stor1 & GSA-Stor2).
>>>> When I first deployed the engine, this server (GSAoV07) I'm having an
>>>> issue with was in green status.
>>>> I then added in the other server (I have it as GSAoV08), who is acting
>>>> as both a Gluster server and a hyperviso...
2017 Jun 12
2
Gluster daemon fails to start
...was initially only going to be used to host the oVirt engine as a hypervisor (it will have Gluster volumes in the near future). I have three other servers I am using for storage. One of those three is also going to be a hypervisor and the other two are dedicated storage servers (Their names are GSA-Stor1 & GSA-Stor2).
When I first deployed the engine, this server (GSAoV07) I'm having an issue with was in green status.
I then added in the other server (I have it as GSAoV08), who is acting as both a Gluster server and a hypervisor. It had the red, downward arrow, for a while after I added it late...
2018 May 18
0
glusterfs 3.6.5 and selfheal
...er with a replicated volume for qemu-kvm (Proxmox)
VM storage, which is mounted using the libgfapi module. The servers are
running the network with MTU 9000 and the client is not (yet).
The question I've got is this:
Is it normal to see this kind of output from: gluster volume heal
HA-100G-POC-PVE info
Brick stor1:/exports/HA-100G-POC-PVE/100G/
/images/100/vm-100-disk-1.raw - Possibly undergoing heal
Number of entries: 1
Brick stor2:/exports/HA-100G-POC-PVE/100G/
/images/100/vm-100-disk-1.raw - Possibly undergoing heal
This happens pretty often but with different disk images on different
replicated volume...
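Not part of the question, but a hedged sketch of how this output is usually double-checked (volume name as in the post; the statistics subcommand assumes a 3.5+ build):
# VM images are written continuously, so a transient 'Possibly undergoing heal'
# entry is often just active I/O being flagged by the crawl; re-run and see if it clears
gluster volume heal HA-100G-POC-PVE info
# Entries listed here, or entries that never clear, point at a real problem
gluster volume heal HA-100G-POC-PVE info split-brain
# Per-brick heal counters
gluster volume heal HA-100G-POC-PVE statistics heal-count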