Displaying 20 results from an estimated 42 matches for "gv1".
2023 Jul 05
1
remove_me files building up
Hi Strahil,
This is the output from the commands:
root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G /data/glusterfs/gv1/brick1/brick/.glusterfs
24M /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K /data/glusterfs/gv1/brick1/brick/mytute
18M /data/glusterfs/gv1/brick1/brick/.shard
0 /data/glusterfs/gv1/brick1/brick/.glusterfs-anonymous-inode-d3d1fdec...
2023 Jul 04
1
remove_me files building up
Thanks for the clarification.
That behaviour is quite weird as arbiter bricks should hold only metadata.
What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space - that is a really strange situation. From which version did you upgrade and which one is now? I assume all gluster TSP members (the servers) have the same...
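A minimal sketch, not from the thread, of follow-up checks one might run on the arbiter to size the shard delete queue directly; the brick paths are assumed from the du output above:
# Assumed brick paths; adjust to your layout.
# How much space does the .remove_me queue occupy per brick?
du -sh /data/glusterfs/gv1/brick1/brick/.shard/.remove_me \
       /data/glusterfs/gv1/brick2/brick/.shard/.remove_me \
       /data/glusterfs/gv1/brick3/brick/.shard/.remove_me
# How many entries are queued on the worst-affected brick?
ls /data/glusterfs/gv1/brick2/brick/.shard/.remove_me | wc -l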
2023 Jun 30
1
remove_me files building up
...rver back up and running and ensured that all healing entries cleared, and also increased the server spec (CPU/Mem) as this seemed to be the potential cause.
Since then however, we've seen some strange behaviour, whereby a lot of 'remove_me' files are building up under `/data/glusterfs/gv1/brick2/brick/.shard/.remove_me/` and `/data/glusterfs/gv1/brick3/brick/.shard/.remove_me/`. This is causing the arbiter to run out of space on brick2 and brick3, as the remove_me files are constantly increasing.
brick1 appears to be fine, the disk usage increases throughout the day and drops down...
2023 Jul 03
1
remove_me files building up
...erver back up and running and ensured that all healing entries cleared, and also increased the server spec (CPU/Mem) as this seemed to be the potential cause.
Since then however, we've seen some strange behaviour, whereby a lot of 'remove_me' files are building up under `/data/glusterfs/gv1/brick2/brick/.shard/.remove_me/` and `/data/glusterfs/gv1/brick3/brick/.shard/.remove_me/`. This is causing the arbiter to run out of space on brick2 and brick3, as the remove_me files are constantly increasing.
brick1 appears to be fine, the disk usage increases throughout the day and drops down i...
2023 Jul 04
1
remove_me files building up
Hi,
Thanks for your response, please find the xfs_info for each brick on the arbiter below:
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1 isize=512 agcount=31, agsize=131007 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data =...
2023 Jul 04
1
remove_me files building up
...ore they're then archived off overnight.
The issue we're seeing isn't with the inodes running out of space, but the actual disk space on the arb server running low.
This is the df -h output for the bricks on the arb server:
/dev/sdd1 15G 12G 3.3G 79% /data/glusterfs/gv1/brick3
/dev/sdc1 15G 2.8G 13G 19% /data/glusterfs/gv1/brick1
/dev/sde1 15G 14G 1.6G 90% /data/glusterfs/gv1/brick2
And this is the df -hi output for the bricks on the arb server:
/dev/sdd1 7.5M 2.7M 4.9M 35% /data/glusterfs/gv1/brick3
/dev/sdc1...
2023 Jul 04
1
remove_me files building up
...,Strahil Nikolov
On Tuesday, July 4, 2023, 2:12 PM, Liam Smith <liam.smith at ek.co> wrote:
Hi,
Thanks for your response, please find the xfs_info for each brick on the arbiter below:
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1              isize=512    agcount=31, agsize=131007 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       b...
2017 Jan 03
2
shadow_copy and glusterfs not working
...store dos attributes = yes
map acl inherit = yes
vfs objects = acl_xattr
server min protocol = SMB2
[gluster]
comment = Daten im Cluster
guest ok = no
read only = no
vfs objects = acl_xattr glusterfs shadow_copy2
glusterfs:volume = gv1
glusterfs:logfile = /var/log/samba/gluster-gv1.log
glusterfs:loglevel = 10
gluster:volfile_server = localhost
kernel share modes = no
path = /win-share
shadow:snapdir = /win-share/.snaps
shadow:basedir = /win-share
shadow:sort = desc...
2017 Jan 04
0
shadow_copy and glusterfs not working
..._glusterfs as
explained in the following thread:
https://lists.samba.org/archive/samba-technical/2016-October/116834.html
If you encounter this issue, you can work around it by setting the mountpoint parameter for the shadow_copy2
module to / as mentioned in the above link.
> glusterfs:volume = gv1
> glusterfs:logfile = /var/log/samba/gluster-gv1.log
> glusterfs:loglevel = 10
> gluster:volfile_server = localhost
> kernel share modes = no
> path = /win-share
> shadow:snapdir = /win-share/.snaps
> shadow:basedir = /win...
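A minimal sketch of that workaround, assuming the [gluster] share shown earlier; testparm/smbcontrol are only used here to verify and reload the configuration:
# Add to the [gluster] share in smb.conf (shadow_copy2 cannot auto-detect the
# mount point on a glusterfs VFS share):
#   shadow:mountpoint = /
testparm -s 2>/dev/null | grep mountpoint    # confirm the parameter is parsed
smbcontrol smbd reload-config                # reload smbd configuration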
2018 Feb 07
0
Fwd: Troubleshooting glusterfs
...blic/data/outputs/merged/c0a91c500be311e8846eb2f7a7fdd356-video_audio_merge-2/c0a91c500be311e8846eb2f7a7fdd356-video_join-2.mp4'
I've checked mnt log and seems there are issues with sharding:
[2018-02-07 11:52:36.200554] E [MSGID: 133010]
[shard.c:1724:shard_common_lookup_shards_cbk] 140-gv1-shard: Lookup on
shard 1 failed. Base file gfid = b3a24312-c1fb-4fe0-b11c-0ca264233f62
[Stale file handle]
So this time we started a distributed, non-replicated volume with 4 20GB
bricks. Per your advice to add more storage at a time, we were adding 2 more
20GB bricks each time storage total free s...
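A minimal sketch, not from the thread, of checking the reported shard directly on a brick; the brick path is a placeholder, and shards live under .shard named <base-gfid>.<index>:
BRICK=/path/to/brick            # placeholder; repeat for every brick of the volume
ls -l "$BRICK/.shard/b3a24312-c1fb-4fe0-b11c-0ca264233f62.1"
getfattr -d -e hex -m . "$BRICK/.shard/b3a24312-c1fb-4fe0-b11c-0ca264233f62.1"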
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
Hello Nithya!
Thank you so much, I think we are close to building a stable storage solution
according to your recommendations. Here's our rebalance log - please don't
pay attention to error messages after 9AM - this is when we manually
destroyed volume to recreate it for further testing. Also all remove-brick
operations you could see in the log were executed manually when recreating
volume.
2018 Feb 08
5
self-heal trouble after changing arbiter brick
...cause of I/O load issues. My setup is as follows:
# gluster volume info
Volume Name: myvol
Type: Distributed-Replicate
Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gv0:/data/glusterfs
Brick2: gv1:/data/glusterfs
Brick3: gv4:/data/gv01-arbiter (arbiter)
Brick4: gv2:/data/glusterfs
Brick5: gv3:/data/glusterfs
Brick6: gv1:/data/gv23-arbiter (arbiter)
Brick7: gv4:/data/glusterfs
Brick8: gv5:/data/glusterfs
Brick9: pluto:/var/gv45-arbiter (arbiter)
Options Reconfigured:
nfs.disable: on
transport...
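A minimal sketch, not from the thread, of how the heal backlog is usually gauged on recent Gluster releases; the volume name myvol is taken from the output above:
gluster volume heal myvol info summary            # pending entries per brick, condensed
gluster volume heal myvol statistics heal-count   # count of entries needing heal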
2018 Feb 09
0
self-heal trouble after changing arbiter brick
...4015-ae34-04e8bf31fd4f>
...
And so forth. Out of 80k+ lines, fewer than 200 are not related to gfids (and yes, the number of gfids is well beyond 64999):
# grep -c gfid heal-info.fpack
80578
# grep -v gfid heal-info.myvol
Brick gv0:/data/glusterfs
Status: Connected
Number of entries: 0
Brick gv1:/data/glusterfs
Status: Connected
Number of entries: 0
Brick gv4:/data/gv01-arbiter
Status: Connected
Number of entries: 0
Brick gv2:/data/glusterfs
/testset/13f/13f27c303b3cb5e23ee647d8285a4a6d.pack
/testset/05c - Possibly undergoing heal
/testset/b99 - Possibly undergoing heal
/testset/dd7 -...
2018 Feb 09
0
self-heal trouble after changing arbiter brick
...uster volume info
>
> Volume Name: myvol
> Type: Distributed-Replicate
> Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x (2 + 1) = 9
> Transport-type: tcp
> Bricks:
> Brick1: gv0:/data/glusterfs
> Brick2: gv1:/data/glusterfs
> Brick3: gv4:/data/gv01-arbiter (arbiter)
> Brick4: gv2:/data/glusterfs
> Brick5: gv3:/data/glusterfs
> Brick6: gv1:/data/gv23-arbiter (arbiter)
> Brick7: gv4:/data/glusterfs
> Brick8: gv5:/data/glusterfs
> Brick9: pluto:/var/gv45-arbiter (arbiter)
> Options...
2018 Feb 09
1
self-heal trouble after changing arbiter brick
...path names
# file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.myvol-client-6=0x000000010000000100000000
trusted.bit-rot.version=0x02000000000000005a0d2f6900076620
trusted.gfid=0xe46e9a655128456bba0d98568d432717
root at gv1 ~ # getfattr -d -e hex -m . /data/gv23-arbiter/testset/306/30677af808ad578916f54783904e6342.pack
getfattr: Removing leading '/' from absolute path names
# file: data/gv23-arbiter/testset/306/30677af808ad578916f54783904e6342.pack
trusted.gfid=0xe46e9a655128456bba0d98568d432717
Is it okay th...
2024 Jan 03
0
Pre Validation failed on 192.168.3.31. Volume gv1 does not exist
...usterd-mgmt-handler.c:321:glusterd_handle_pre_validate_fn]
0-management: Pre Validation failed on operation Add brick
Commands on 192.168.3.31 all report Unable to find volume: gv0/ or
Volume gv0 does not exist
Been fighting this for a while now and very, very stuck. Note: I added a
new volume gv1 and all was good (initially - broke it repro'ing the real
problem)
Volume Name: gv0
Type: Replicate
Volume ID: 6a87dc01-09b5-4040-8db7-18b5dc3808f2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.3.8:/export/gfs/brick
Brick2: 192.168.3...
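A minimal sketch, not from the thread, of checks for this kind of peer/volume mismatch; the hostnames and volume name follow the post above:
gluster peer status                 # do all peers agree on cluster membership?
gluster volume info gv0             # run on a working peer and on 192.168.3.31
ls /var/lib/glusterd/vols/          # on 192.168.3.31: is the gv0 definition present at all?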
2013 Apr 30
0
Libvirt and Glusterfs
...orts.insecure yes
option rpc-auth-allow-insecure on
end-volume
I have defined the following line in my domain.xml
<disk type='network' device='disk'>
<driver name='qemu' cache='none'/>
<source protocol='gluster' name='gv1/test.img'>
<host name='127.0.0.1' transport='tcp' />
</source>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x0'/>...
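A hedged sketch of the volume-level option that usually accompanies the rpc-auth-allow-insecure setting shown above, so qemu's unprivileged-port connections are accepted; the volume name gv1 is taken from the XML, and the volume must be restarted for it to take effect:
gluster volume set gv1 server.allow-insecure on
gluster volume stop gv1
gluster volume start gv1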
2024 Jan 26
1
Gluster communication via TLS client problem
...reated (CN customised)
Then combine all certificates into one and copy them to /usr/lib/ssl/ as
glusterfs.ca to all hosts.
Create the file /var/lib/glusterd/secure-access on the gluster peers.
Gluster volume stopped and glusterd restarted.
Then set the following parameters:
gluster volume set gv1 auth.ssl-allow '*'
gluster volume set gv1 client.ssl on
gluster volume set gv1 server.ssl on
When mounting the volume on the peers, I get the following messages:
-------------------
_64-linux-gnu/libglusterfs.so.0(runner_log+0x100) [0x7ffa11782640] )
0-management: Ran script:
/var/lib/...
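A minimal sketch, not from the thread, of a manual mount to capture the client-side TLS errors at higher verbosity; node1 and the mount point are placeholders:
mkdir -p /mnt/gv1
mount -t glusterfs -o log-level=DEBUG node1:/gv1 /mnt/gv1
less /var/log/glusterfs/mnt-gv1.log      # client log is named after the mount point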
2009 May 05
2
problem with ggplot2 boxplot, groups and facets
I have the following problem:
The call
qplot(wg, v.realtime, data=df.best.medians$gv1, colour=sp, geom="boxplot")
works nicely: for each value of the wg factor I get two box-plots (two levels in
the sp factor) in different colours, side-by-side, centered at the wg x-axis.
However, I want to separate the data belonging to different levels of the n
factor, so I add the facet...
2007 Jul 20
5
[LLVMdev] Seg faulting on vector ops
...return func;
}
// modified from the fibonacci example
int main(int argc, char **argv)
{
Module* pVectorModule = new Module("test vectors");
Function* pMain = generateVectorAndSelect(pVectorModule);
pVectorModule->print(std::cout);
GenericValue gv1, gv2, gvR;
gv1.FloatVal = 2.0f;
ExistingModuleProvider *pMP = new
ExistingModuleProvider(pVectorModule);
pMP->getModule()->setDataLayout("e-p:32:32:32-i1:8:8:8-i8:8:8:8-i32:32:32:32-f32:32:32:32");
ExecutionEngine *pEE = ExecutionEngine::create(pMP, false);...