search for: vols

Displaying 20 results from an estimated 5509 matches for "vols".

2018 Feb 21
2
Geo replication snapshot error
Hi all, I use Gluster 3.12 on CentOS 7. I am writing a snapshot program for my geo-replicated cluster. Now that I have started running tests with my application, I have found very strange behavior regarding geo-replication in Gluster. I have set up my geo-replication according to the docs: http://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/ Both master and slave clusters are
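One commonly described sequence is to pause the geo-replication session, take the snapshot, and then resume; a minimal sketch with placeholder volume, host, and snapshot names (none of these are taken from the thread):

  # gluster volume geo-replication <mastervol> <slavehost>::<slavevol> pause
  # gluster snapshot create <snapname> <mastervol>
  # gluster volume geo-replication <mastervol> <slavehost>::<slavevol> resume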
2017 Oct 19
3
gluster tiering errors
All, I am new to Gluster and have some questions/concerns about tiering errors that I see in the log files.
OS: CentOS 7.3.1611
Gluster version: 3.10.5
Samba version: 4.6.2
I see the following (scrubbed) on Node 1 in /var/log/glusterfs/tier/<vol>/tierd.log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed
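For readers hitting similar messages, a first check worth trying is the tier daemon's own summary of promotions and demotions; the volume name is a placeholder:

  # gluster volume tier <vol> status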
2017 Oct 22
0
gluster tiering errors
Herb, What are the high and low watermarks set at for the tier?
# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low
What is the size of the file that failed to migrate, as per the following tierd log entry?
[2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed for
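If the watermarks turn out to be the problem, they can be adjusted with volume set; the values below are only illustrative, not a recommendation:

  # gluster volume set <vol> cluster.watermark-hi 90
  # gluster volume set <vol> cluster.watermark-low 75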
2017 Oct 22
1
gluster tiering errors
There are several "no space left on device" messages. I would first check that free disk space is available for the volume.
On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote:
> Herb,
> What are the high and low watermarks for the tier set at ?
>
> # gluster volume get <vol> cluster.watermark-hi
>
> # gluster volume get
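Two quick ways to confirm per-brick free space from the Gluster side, assuming the volume name and brick mount point are filled in for the actual setup:

  # gluster volume status <vol> detail
  # df -h <brick-mount-point>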
2017 Oct 24
2
gluster tiering errors
Milind, thank you for the response.
>> What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
Option                   Value
------                   -----
cluster.watermark-hi     90
# gluster volume get <vol> cluster.watermark-low
Option
2018 Feb 21
0
Geo replication snapshot error
Hi, Thanks for reporting the issue. This seems to be a bug. Could you please raise a bug at https://bugzilla.redhat.com/ under community/glusterfs? We will take a look at it and fix it. Thanks, Kotresh HR
On Wed, Feb 21, 2018 at 2:01 PM, Marcus Pedersén <marcus.pedersen at slu.se> wrote:
> Hi all,
> I use gluster 3.12 on centos 7.
> I am writing a snapshot program for my
2017 Oct 27
0
gluster tiering errors
Herb, I'm trying to weed out issues here. I can see quota turned *on* and would like you to check the quota settings and test system behavior *with quota turned off*. Although the file that failed migration was only 29K, I'm being a bit paranoid while weeding out issues. Are you still facing tiering errors? I can see your response to Alex with the disk space consumption and
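A sketch of the quota check being asked for here; as an assumption worth verifying first, disabling quota discards the configured limits, so record them before the test:

  # gluster volume quota <vol> list
  # gluster volume quota <vol> disable
  (re-run the workload that produced the migration errors)
  # gluster volume quota <vol> enable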
2013 Mar 07
4
[Gluster-devel] glusterfs-3.4.0alpha2 released
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha2/
SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.0alpha2.tar.gz
This release is made off jenkins-release-19 -- Gluster Build System
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...glusterfs/vol0
/dev/sdb2   25T  654G   24T   3%  /mnt/disk_b2/glusterfs/vol0
dev/sdc1    50T   15T   35T  30%  /mnt/disk_c/glusterfs/vol1
/dev/sdd1   50T   15T   35T  30%  /mnt/disk_d/glusterfs/vol1
[root at stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...db2       25T  654G   24T   3%  /mnt/disk_b2/glusterfs/vol0
> dev/sdc1   50T   15T   35T  30%  /mnt/disk_c/glusterfs/vol1
> /dev/sdd1  50T   15T   35T  30%  /mnt/disk_d/glusterfs/vol1
>
>
> [root at stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.
> stor1data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.
> stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option
> shared-brick-count 1
> /va...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...654G  24T   3%  /mnt/disk_b2/glusterfs/vol0
>> dev/sdc1   50T   15T   35T  30%  /mnt/disk_c/glusterfs/vol1
>> /dev/sdd1  50T   15T   35T  30%  /mnt/disk_d/glusterfs/vol1
>>
>>
>> [root at stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.
>> mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.
>> mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose, There is a known issue with gluster 3.12.x builds (see [1]), so you may be running into this. The "shared-brick-count" values seem fine on stor1. Please send us "grep -n "share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes so we can check whether they are the cause. Regards, Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
On 28 February 2018 at 03:03, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi,
>
> Some days ago all my glusterfs...
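A quick way to collect the same output from the other nodes, assuming SSH access between peers; the host names below are placeholders for the actual peer names:

  for h in stor1data stor2data stor3data; do
      echo "== $h =="
      ssh "$h" 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'
  done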
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...terfs/vol1/brick1
Brick4: stor3data:/mnt/disk_d/glusterfs/vol1/brick1
Options Reconfigured:
cluster.min-free-inodes: 6%
performance.cache-size: 4GB
cluster.min-free-disk: 1%
performance.io-thread-count: 16
performance.readdir-ahead: on
[root at stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volume...
2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
Does anyone have any ideas about how to fix, or work around, the following issue? Thanks! Bug 1549714 - On sharded tiered volume, only first shard of new file goes on hot tier. https://bugzilla.redhat.com/show_bug.cgi?id=1549714 On a sharded tiered volume, only the first shard of a new file goes on the hot tier; the rest
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...nt/disk_b2/glusterfs/vol0
>>> dev/sdc1   50T   15T   35T  30%  /mnt/disk_c/glusterfs/vol1
>>> /dev/sdd1  50T   15T   35T  30%  /mnt/disk_d/glusterfs/vol1
>>>
>>>
>>> [root at stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:
>>> option shared-brick-count 1
>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt
>>> -glusterfs-vol1-brick1.vol.rpmsave:3: option sha...
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
.../vol0
>>>> dev/sdc1   50T   15T   35T  30%  /mnt/disk_c/glusterfs/vol1
>>>> /dev/sdd1  50T   15T   35T  30%  /mnt/disk_d/glusterfs/vol1
>>>>
>>>>
>>>> [root at stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:
>>>> option shared-brick-count 1
>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt
>>>> -glusterfs-vol1-brick1.vol.rpmsave:...
2017 Dec 18
2
Upgrading from Gluster 3.8 to 3.12
Hi, I have a cluster of 10 servers, all running Fedora 24 with Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with Gluster 3.12. I have read the documentation and done some testing, but I would like to run my plan past some (more?) educated minds. The current setup is:
Volume Name: vol0
Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1:
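Not the official procedure, but a rough per-node sketch of the kind of rolling sequence usually described for replicated volumes; the upgrade commands are assumptions and should be checked against the Fedora and Gluster upgrade guides:

  # stop Gluster services on the node being upgraded
  systemctl stop glusterd
  pkill glusterfs; pkill glusterfsd

  # upgrade the OS and Gluster packages (Fedora system-upgrade plugin shown as an example)
  dnf system-upgrade download --releasever=27
  dnf system-upgrade reboot

  # after the reboot, confirm glusterd is running and wait for self-heal to finish
  gluster volume heal vol0 info

  # move on to the next server only once no entries are pending heal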
2008 Jun 01
2
optim error - repost
Here is a clean version. I did this with nls and it works (see below), but I need to do it with optim. Keun-Hyung
# optim
vol <- rep(c(0.03, 0.5, 2, 4, 8, 16, 32), 3)
time <- rep(c(2, 4, 8), each = 7)
p.mated <- c(0.47, 0.48, 0.43, 0.43, 0.26, 0.23, NA, 0.68, 0.62, 0.64, 0.58, 0.53, 0.47, 0.24, 0.8, 0.79, 0.71, 0.56, 0.74, 0.8, 0.47)
eury <- data.frame(vol = vol, time = time, p.mated = p.mated)
2019 Nov 03
4
Recent inability to view long filenames stored with scp via samba mount
Greetings Samba team, It has been a long time since I needed to ask a Samba technical question. The server and workstation are both running the latest Samba packages via Ubuntu 16.04 LTS. I recently applied the security updates; in fact, I applied them yesterday.
> samba (2:4.3.11+dfsg-0ubuntu0.16.04.23) xenial-security; urgency=medium
>
> * SECURITY UPDATE: client code can
2006 Oct 31
3
zfs: zvols minor #'s changing and causing probs w/ volumes
Team, **Please respond to me and my coworker listed in the Cc, since neither one of us is on this alias.** QUICK PROBLEM DESCRIPTION: The customer created a dataset which contains all the zvols for a particular zone. The zone is then given access to all the zvols in the dataset using a match statement in the zoneconfig (see the long problem description for details). After the initial boot of the zone, everything appears fine and the localzone zvol dev files match the globalzone zvol dev file...
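For context, the kind of zonecfg device match being described looks roughly like this; the zone, pool, and dataset names are placeholders, not taken from the case:

  # zonecfg -z <zonename>
  zonecfg:<zonename>> add device
  zonecfg:<zonename>:device> set match=/dev/zvol/dsk/<pool>/<dataset>/*
  zonecfg:<zonename>:device> end
  zonecfg:<zonename>> commit
  zonecfg:<zonename>> exit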