Displaying 20 results from an estimated 400 matches similar to: "shadow_copy and glusterfs not working"
2017 Jan 04
0
shadow_copy and glusterfs not working
On Tue, 2017-01-03 at 15:16 +0100, Stefan Kania via samba wrote:
> Hello,
>
> we are trying to configure a CTDB-Cluster with Glusterfs. We are using
> Samba 4.5 together with gluster 3.9. We set up an LVM2 thin-provisioned
> volume to use gluster snapshots.
> Then we configured the first share without using shadow_copy2 and
> everything was working fine.
>
> Then we
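A minimal sketch of this kind of setup, assuming a volume named gv0 and a share named gvshare (neither name is from the thread). Gluster snapshots become visible to shadow_copy2 through the user-serviceable-snapshots directory .snaps:

  # enable user-serviceable snapshots so snapshots appear under .snaps
  gluster volume set gv0 features.uss enable
  # take a snapshot; on lvm2 thin-provisioned volumes this creates a thin LV clone
  gluster snapshot create snap1 gv0 no-timestamp

  [gvshare]
      path = /
      vfs objects = glusterfs shadow_copy2
      glusterfs:volume = gv0
      shadow:snapdir = .snaps
      shadow:format = GMT-%Y.%m.%d-%H.%M.%S

The module order and the shadow:format string above are assumptions: shadow:format must match how the snapshot directories are actually named under .snaps, so check the vfs_glusterfs and vfs_shadow_copy2 man pages for your Samba version before treating this as a known-good config.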
2023 Jul 05
1
remove_me files building up
Hi Strahil,
This is the output from the commands:
root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G /data/glusterfs/gv1/brick1/brick/.glusterfs
24M /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K /data/glusterfs/gv1/brick1/brick/mytute
18M /data/glusterfs/gv1/brick1/brick/.shard
0
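A quick way to see where inside the brick the space actually sits, reusing the brick path from the du output above:

  du -ax /data/glusterfs/gv1/brick1/brick | sort -n | tail -20   # 20 largest files/dirs, sizes in KiB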
2023 Jul 04
1
remove_me files building up
Thanks for the clarification.
That behaviour is quite weird as arbiter bricks should hold only metadata.
What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space - that is a really strange situation. From which version
2023 Jun 30
1
remove_me files building up
Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
We had an issue a while back where one of the servers crashed; we got the server back up and running, ensured that all healing entries cleared, and also increased the server spec (CPU/Mem) as this seemed to be the potential cause.
Since then however, we've seen some strange behaviour,
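The usual first checks in this situation, assuming the volume is named gv1 as in the rest of the thread:

  gluster volume status gv1                  # are all bricks and self-heal daemons up?
  gluster volume heal gv1 info               # entries still pending heal, per brick
  gluster volume heal gv1 info split-brain   # anything actually in split-brain?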
2023 Jul 04
1
remove_me files building up
Hi,
Thanks for your response, please find the xfs_info for each brick on the arbiter below:
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1              isize=512    agcount=31, agsize=131007 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =
2023 Jul 03
1
remove_me files building up
Hi,
you mentioned that the arbiter bricks run out of inodes. Are you using XFS? Can you provide the xfs_info of each brick?
Best Regards, Strahil Nikolov
On Sat, Jul 1, 2023 at 19:41, Liam Smith <liam.smith at ek.co> wrote: Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
We had an issue a while back where one of the servers
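A one-liner to collect what is being asked for here, assuming the three brick mount points that appear elsewhere in the thread:

  for b in brick1 brick2 brick3; do xfs_info /data/glusterfs/gv1/$b; df -i /data/glusterfs/gv1/$b; done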
2023 Jul 04
1
remove_me files building up
Hi Strahil,
We're using gluster to act as a share for an application to temporarily process and store files, before they're then archived off overnight.
The issue we're seeing isn't with the inodes running out of space, but the actual disk space on the arb server running low.
This is the df -h output for the bricks on the arb server:
/dev/sdd1 15G 12G 3.3G 79%
2023 Jul 04
1
remove_me files building up
Hi Liam,
I saw that your XFS uses 'imaxpct=25', which for an arbiter brick is a little bit low.
If you have free space on the bricks, increase the maxpct to a bigger value, like:
xfs_growfs -m 80 /path/to/brick
That will set 80% of the filesystem for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future.
Of course, always
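A concrete before/after version of that suggestion, using one of the brick paths from earlier in the thread:

  df -i /data/glusterfs/gv1/brick1             # note Inodes/IFree before the change
  xfs_growfs -m 80 /data/glusterfs/gv1/brick1  # raise imaxpct from 25 to 80
  df -i /data/glusterfs/gv1/brick1             # the inode ceiling should now be higher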
2018 Feb 07
0
Fwd: Troubleshooting glusterfs
Hello Nithya! Thank you for your help on figuring this out!
We changed our configuration and, after having a successful test yesterday,
we have run into a new issue today.
The test, including moderate read/write (~20-30 Mb/s) and scaling the
storage, had been running for about 3 hours when at some moment the system got stuck:
On the user level there are such errors when trying to work with filesystem:
OSError:
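When a FUSE mount hangs like this, the client log is the first place to look. A sketch, assuming a mount point of /mnt/gv (GlusterFS names the client log after the mount point, with slashes turned into dashes):

  tail -f /var/log/glusterfs/mnt-gv.log   # client-side errors and disconnects
  gluster volume status                   # on a server: confirm all bricks are online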
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hey,
Did the heal complete, and do you still have some entries pending heal?
If yes, then can you provide the following information to debug the issue.
1. Which version of gluster you are running
2. gluster volume heal <volname> info summary or gluster volume heal
<volname> info
3. getfattr -d -e hex -m . <filepath-on-brick> output of any one of the files
which is pending heal from all
2018 Feb 09
1
self-heal trouble after changing arbiter brick
Hi Karthik,
Thank you very much, you made me much more relaxed. Below is getfattr output for a file from all the bricks:
root at gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
getfattr: Removing leading '/' from absolute path names
# file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hi Karthik,
Thank you for your reply. The heal is still ongoing, as /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info.
The gluster version is 3.10.9 and 3.10.10 (the version update in progress). It doesn't have info summary [yet?], and the heal info is way too long to attach here. (It takes more than 20 minutes just to collect
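If collecting the full listing takes that long, a per-brick counter is much quicker, assuming your 3.10 build supports it:

  gluster volume heal <volname> statistics heal-count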
2018 Feb 08
5
self-heal trouble after changing arbiter brick
Hi folks,
I'm having trouble moving an arbiter brick to another server because of I/O load issues. My setup is as follows:
# gluster volume info
Volume Name: myvol
Type: Distributed-Replicate
Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gv0:/data/glusterfs
Brick2: gv1:/data/glusterfs
Brick3:
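For reference, the single-step way to move a brick, with hypothetical old and new host names; replace-brick swaps the arbiter in place and self-heal then populates the new brick:

  gluster volume replace-brick myvol oldhost:/data/glusterfs newhost:/data/glusterfs commit force
  gluster volume heal myvol info   # watch the new arbiter fill up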
2024 Jan 26
1
Gluster communication via TLS client problem
Hi to all,
The system is running Debian 12 with Gluster 10. All systems are using
the same versions.
I'm trying to encrypt the communication between the peers and the clients via
TLS. The encryption between the peers works, but when I try to mount the
volume on the client I always get an error.
What I have done so far:
1. All hosts and clients can resolve the names of all systems involved.
2. the
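For comparison, a sketch of the usual TLS setup; the paths are the defaults Gluster looks for, while the CN values and the volume name gv0 are assumptions:

  # on every peer and client
  openssl genrsa -out /etc/ssl/glusterfs.key 2048
  openssl req -new -x509 -key /etc/ssl/glusterfs.key \
      -subj "/CN=host1.example.net" -days 365 -out /etc/ssl/glusterfs.pem
  cat host*.pem client*.pem > /etc/ssl/glusterfs.ca   # every cert, concatenated
  touch /var/lib/glusterd/secure-access               # TLS for the management path too

  # on one peer
  gluster volume set gv0 client.ssl on
  gluster volume set gv0 server.ssl on
  gluster volume set gv0 auth.ssl-allow 'host1.example.net,host2.example.net'

A mount error at this point is often a CN missing from auth.ssl-allow or a stale glusterfs.ca on the client.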
2009 May 05
2
problem with ggplot2 boxplot, groups and facets
I have the following problem:
The call
qplot(wg, v.realtime, data=df.best.medians$gv1, colour=sp, geom="boxplot")
works nice: for each value of the wg factor I get two box-plots (two levels in
the sp factor) in different colours, side-by-side, centered at the wg x-axis.
However, I want to separate the data belonging to different levels of the n
factor, so I add the facets option:
2013 Jul 24
1
Libvirt and Glusterfs pool
Hi,
I use the QEMU-GlusterFS native integration (no FUSE mount) with
libvirt.
Now I create a volume by issuing:
# qemu-img create gluster://localhost/gv1/test.img 5G
Then using the libvirt I declare the following lines in my domain.xml :
<disk type='network' device='disk'>
<driver name='qemu' cache='none'/>
<source
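The quoted XML is cut off above; a typical continuation of such a definition looks like this (illustrative, not the poster's actual config):

    <disk type='network' device='disk'>
      <driver name='qemu' cache='none'/>
      <source protocol='gluster' name='gv1/test.img'>
        <host name='localhost' port='24007' transport='tcp'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>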
2024 Sep 29
1
Growing cluster: peering worked, staging failed
Fellow gluster users,
trying to extend a 3-node cluster that has been serving me very reliably
for a long time now.
Cluster is serving two volumes:
Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 9bafc4d2-d9b6-4b6d-a631-1cf42d1d2559
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (2 + 1) = 18
Transport-type: tcp
Volume Name: gv1
Type: Replicate
Volume ID:
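For context, extending a distributed-replicate volume like gv0 is normally done in multiples of the replica set; host and brick paths below are placeholders:

  gluster peer probe newnode1
  gluster peer probe newnode2
  gluster peer probe newnode3
  gluster volume add-brick gv0 newnode1:/data/brick newnode2:/data/brick newnode3:/data/brick

When staging fails, /var/log/glusterfs/glusterd.log on each peer usually names the node and the reason (an op-version mismatch is a common one).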
2011 Oct 06
1
fuse mount disconnecting...
Hi,
I am getting regular crashes which result in the mount being dropped:
n1:~ # ls /n/auto/gv1/
ls: cannot access /n/auto/gv1/: Transport endpoint is not connected
client side error log: http://pastebin.com/UgMaLq42
I am also finding that the gluster servers also sometimes just drop out -
and I need to kill all the server-side gluster processes and restart
glusterd. I'm not sure if
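The usual recovery for a dead FUSE mount ("Transport endpoint is not connected"), with the mount point taken from the post and a placeholder server name:

  umount -l /n/auto/gv1
  mount -t glusterfs server1:/gv1 /n/auto/gv1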
2007 Jul 21
0
[LLVMdev] Seg faulting on vector ops
On Fri, 20 Jul 2007, Chuck Rose III wrote:
> I'm looking to make use of the vectorization primitives in the Intel
> chip with the code we generate from LLVM and so I've started
> experimenting with it. What is the state of the machine code generated
> for vectors? In my tinkering, I seem to be getting some wonky machine
> instructions, but I'm most likely just doing
2007 Jul 24
2
[LLVMdev] Seg faulting on vector ops
Hrm. This problem shouldn't be target-specific. I am pretty sure the
prologue / epilogue inserter aligns the stack correctly if there are
stack objects with a greater-than-default stack alignment requirement.
It seems the initial alloca() instruction should specify 16-byte
alignment?
Evan
On Jul 21, 2007, at 2:51 PM, Chris Lattner wrote:
> On Fri, 20 Jul 2007, Chuck Rose III wrote: