Displaying 20 results from an estimated 5542 matches for "vol".
2018 Feb 21
2
Geo replication snapshot error
...s:
http://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
Both master and slave clusters are replicated with just two
machines (VM) and no arbiter.
I have set up a geo-user (called geouser) and do not use
root as the geo user, as specified in the docs.
Both my master and slave volumes are named: vol
If I pause the geo-replication with:
gluster volume geo-replication vol geouser@ggluster1-geo::vol pause
Pausing geo-replication session between vol & geouser@ggluster1-geo::vol has been successful
Create a snapshot:
gluster snapshot create my_snap_no_1000 vol
snapsho...
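For context, the sequence being attempted here is roughly the following (a minimal sketch reusing the names from the post; the resume command is assumed as the natural counterpart of the pause and does not appear in the excerpt):
# Stop syncing before taking the snapshot
gluster volume geo-replication vol geouser@ggluster1-geo::vol pause
# Snapshot the master volume
gluster snapshot create my_snap_no_1000 vol
# Resume the session afterwards (assumed counterpart, not shown in the post)
gluster volume geo-replication vol geouser@ggluster1-geo::vol resume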
2017 Oct 19
3
gluster tiering errors
All,
I am new to gluster and have some questions/concerns about tiering
errors that I see in the log files.
OS: CentOS 7.3.1611
Gluster version: 3.10.5
Samba version: 4.6.2
I see the following (scrubbed):
Node 1 /var/log/glusterfs/tier/<vol>/tierd.log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file]
0-<vol>-tier-dht: Promotion failed for <file>(gfid:edaf97e1-02e0-
4838-9d26-71ea3aab22fb)
[2017-10-19 17:52:07.525110] E [MSGID: 109011]
[dht-common.c:7188:dht_create] 0-<vol>...
2017 Oct 22
0
gluster tiering errors
Herb,
What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low
What is the size of the file that failed to migrate as per the following
tierd log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Pr...
2017 Oct 22
1
gluster tiering errors
There are several messages "no space left on device". I would check first
that free disk space is available for the volume.
On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote:
> Herb,
> What are the high and low watermarks for the tier set at ?
>
> # gluster volume get <vol> cluster.watermark-hi
>
> # gluster volume get <vol> cluster.watermark-low...
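The free-space check suggested above could look roughly like this (a sketch; <vol> stands for the tiered volume and the brick path is a placeholder for the actual hot-tier brick mount point):
# Per-brick capacity and free space as reported by gluster
gluster volume status <vol> detail
# Free space on the hot-tier brick filesystem itself
df -h /path/to/hot-tier-brick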
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response..
>> What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
Option                                  Value
------                                  -----
cluster.watermark-hi                    90
# gluster volume get <vol> cluster.watermark-low
Option                                  Value
------...
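If the hot tier is filling up before demotion kicks in, the watermarks queried above can be adjusted with the standard volume set command (a sketch; 75 is just an example value, not a recommendation from the thread):
gluster volume get <vol> cluster.watermark-hi
gluster volume get <vol> cluster.watermark-low
# Lower the high watermark so demotion to the cold tier starts earlier (example value)
gluster volume set <vol> cluster.watermark-hi 75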
2018 Feb 21
0
Geo replication snapshot error
...Administrator%20Guide/Geo%20Replication/
>
> Both master and slave clusters are replicated with just two
> machines (VM) and no arbiter.
>
> I have set up a geo-user (called geouser) and do not use
> root as the geo user, as specified in the docs.
>
> Both my master and slave volumes are named: vol
>
> If I pause the geo-replication with:
> gluster volume geo-replication vol geouser@ggluster1-geo::vol pause
> Pausing geo-replication session between vol & geouser@ggluster1-geo::vol
> has been successful
>
> Create a snapshot:
> gluster snaps...
2017 Oct 27
0
gluster tiering errors
...und
it a bit ambiguous w.r.t. state of affairs.
--
Milind
On Tue, Oct 24, 2017 at 11:34 PM, Herb Burnswell <
herbert.burnswell at gmail.com> wrote:
> Milind - Thank you for the response..
>
> >> What are the high and low watermarks for the tier set at ?
>
> # gluster volume get <vol> cluster.watermark-hi
> Option Value
>
> ------ -----
>
> cluster.watermark-hi 90
>
>
> # gluster volume get <vol> cluster.watermark-low
> Option...
2013 Mar 07
4
[Gluster-devel] glusterfs-3.4.0alpha2 released
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha2/
SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.0alpha2.tar.gz
This release is made off jenkins-release-19
-- Gluster Build System
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root at stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1
197T 61T 136T 31% /volumedisk1
[root at stor2 ~]# df -h
Filesystem Size Used Avail Use% Mou...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
> That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
> 101T 3,3T 97T 4% /volumedisk0
> stor1data:/volumedisk1
> 197T 61T 136T 31% /volumedisk1
>
>
> [root at stor2 ~]# df -h
> File...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, after adding the new peer with its bricks I ran the 'rebalance
force' operation. This task finished successfully (you can see the info below)
and the number of files on the 3 nodes was very similar.
For volumedisk1 I only have files o...
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
On 28 February 2018 at 03:03, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi,
>
> Some days ago all my glusterfs...
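The check requested above looks roughly like this when run on each node (a sketch; the hostnames are assumptions based on the node names appearing in the thread, and collecting the output over ssh is only one possible way to do it):
# Inspect the shared-brick-count values in the generated brick volfiles
grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
# Collect the same output from the other nodes (assumed hostnames)
for h in stor2data stor3data; do ssh $h 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'; done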
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
Some days ago all of my glusterfs configuration was working fine. Today I
realized that the total size reported by the df command had changed and is
smaller than the aggregated capacity of all the bricks in the volume.
I checked that the status of all the volumes is fine, all the glusterd daemons
are running, and there are no errors in the logs; however, df shows a wrong total size.
My configuration for one volume: volumedisk1
[root at stor1 ~]# gluster volume status volumedisk1 detail
Status of volume: volumedisk1
-------...
2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
Does anyone have any ideas about how to fix, or work around, the
following issue?
Thanks!
Bug 1549714 - On sharded tiered volume, only first shard of new file
goes on hot tier.
https://bugzilla.redhat.com/show_bug.cgi?id=1549714
On sharded tiered volume, only first shard of new file goes on hot tier.
On a sharded tiered volume, only the first shard of a new file
goes on the hot tier, the rest are written to the cold tie...
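The placement described in the report can be inspected directly on the bricks: the base file holds the first shard, while the remaining shards are stored as <gfid>.<n> entries under the hidden .shard directory on each brick. A rough sketch (the brick paths are placeholders):
# The base file (first shard) - per the report this lands on the hot tier
ls -l /path/to/hot-tier-brick/dir/newfile
# The remaining shards - per the report these end up on the cold tier
ls -l /path/to/cold-tier-brick/.shard/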
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, after adding the new peer with its bricks I ran the 'rebalance
> force' operation. This task finished successfully (you can see the info below)
> and the number of files on the 3 nodes was very similar.
>
> For vo...
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
---------   -----------   -----------   -------...
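For reference, the status shown above comes from the standard rebalance commands; the 'force' run mentioned earlier in the thread would have looked roughly like this (a sketch using the volume name from the post):
# Rebalance after adding the new bricks (forced variant)
gluster volume rebalance volumedisk1 start force
# Check per-node progress
gluster volume rebalance volumedisk1 status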
2017 Dec 18
2
Upgrading from Gluster 3.8 to 3.12
Hi,
I have a cluster of 10 servers all running Fedora 24 along with
Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
Gluster 3.12. I saw the documentation and did some testing but I
would like to run my plan through some (more?) educated minds.
The current setup is:
Volume Name: vol0
Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1: glt01:/vol/vol0
Brick2: glt02:/vol/vol0
Brick3: glt05:/vol/vol0 (arbiter)
Brick4: glt03:/vol/vol0
Brick5: glt04:/vol/vol0
Brick6: glt06:/vol/vol0 (arbiter)
Volume Name: vol1
Distributed-Replicate
Number of Bricks...
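Before taking each server down in a rolling upgrade, it is common to confirm that the cluster is healthy; a minimal sketch of such checks for the vol0 volume above (these commands are not part of the post itself):
# All peers connected?
gluster peer status
# All bricks online?
gluster volume status vol0
# No pending heals before moving on to the next server
gluster volume heal vol0 info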
2008 Jun 01
2
optim error - repost
Here is a clean version. I did this with nls and it works (see below), but
I need to do it with optim. Keun-Hyung
# optim
vol <- rep(c(0.03, 0.5, 2, 4, 8, 16, 32), 3)   # volume levels, repeated for each time point
time <- rep(c(2, 4, 8), each = 7)              # exposure times
p.mated <- c(0.47, 0.48, 0.43, 0.43, 0.26, 0.23, NA,
             0.68, 0.62, 0.64, 0.58, 0.53, 0.47, 0.24,
             0.8, 0.79, 0.71, 0.56, 0.74, 0.8, 0.47)  # observed proportion mated
eury <- data.frame(vol = vol, time = time, p.mated = p.mated)
eury <- na.omit(eury); eury   # drop the row with the missing observation
p0<...
2019 Nov 03
4
Recent inability to view long filenames stored with scp via samba mount
...s morning. The scp command works flawlessly as ever.
$ bin/rsync_ldslnx01rhythmbox.sh
sending incremental file list
sent 268 bytes received 15 bytes 188.67 bytes/sec
total size is 1,467,438 speedup is 5,185.29
sending incremental file list
Alfred Brendel/
Alfred Brendel/Beethoven Piano Sonatas Vol I/
Alfred Brendel/Beethoven Piano Sonatas Vol I/Disc 1 - 01 - Piano Sonata No. 29 in B-flat major, op. 106 "Hammerklavier": I. Allegro.mp3
12,279,216 100% 13.53MB/s 0:00:00 (xfr#1, ir-chk=1025/1221)
Alfred Brendel/Beethoven Piano Sonatas Vol I/Disc 1 - 02 - Piano Sonata No. 29...
2006 Oct 31
3
zfs: zvols minor #''s changing and causing probs w/ volumes
Team,
**Please respond to me and my coworker listed in the Cc, since neither
one of us is on this alias**
QUICK PROBLEM DESCRIPTION:
The customer (cu) created a dataset which contains all the zvols for a particular
zone. The zone is then given access to all the zvols in the dataset
using a match statement in the zoneconfig (see long problem description
for details). After the initial boot of the zone everything appears
fine and the localzone zvol dev files match the globalzone zvol dev
fil...
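The "match statement in the zoneconfig" referred to above typically looks something like the following (a sketch with assumed zone and dataset names; the customer's actual names are not shown in the excerpt):
# Delegate all zvol device nodes under the dataset to the zone (assumed names)
zonecfg -z myzone
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/zvol/dsk/tank/zonevols/*
zonecfg:myzone:device> end
zonecfg:myzone> commit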