Displaying 12 results from an estimated 12 matches for "glustervolum".
2017 May 30
1
Gluster client mount fails in mid flight with signum 15
...he time of the failures
We've searched the interweb but cannot find anyone else having the same problem in mid flight
The clients have four mounts of volumes from the same server, all mounts fail simultaneously
Peer status looks ok
Volume status looks ok
Volume info looks like this:
Volume Name: GLUSTERVOLUME
Type: Replicate
Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/brick
Options Reconfigured:
transport.address-...
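For context, the three checks mentioned above correspond to the standard gluster CLI; the commands below are a generic sketch using the names from this report, not the poster's exact session:

   # on any server: peer and volume health
   gluster peer status
   gluster volume status GLUSTERVOLUME
   gluster volume info GLUSTERVOLUME

   # on a client: "signum 15" is SIGTERM, logged by the fuse client as it shuts down,
   # so the client mount log shows when (and after what) the process was told to exit
   grep -i signum /var/log/glusterfs/*.log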
2013 Dec 17
1
Project pre planning
Hello GlusterFS users,
can anybody please give me their opinion on the following facts and
questions:
4 storage servers with 16 SATA bays, connected by GigE:
Q1:
Volume will be set up as distributed-replicated.
Maildir, FTP dir, htdocs, file store directory => as subdirectories in one big
GlusterVolume, or each dir in its own GlusterVolume?
Q2: Set up the bricks as a collection of JBODs, or underlay them with a RAID-5
array?
Q3: A client mounts the GlusterFS from an NFS export of node 1. What if that
server is down? Would a setup with a virtual IP triggered by heartbeat be a
solution to p...
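To make Q1 concrete, a distributed-replicated volume over the four servers would be created by listing the bricks in replica-pair order; the hostnames and brick paths below are placeholders (not from the post), and this is only a sketch of the CLI, not a recommendation for one big volume over per-directory volumes:

   # 4 bricks with replica 2  =>  a 2 x 2 distributed-replicate volume
   gluster volume create bigvol replica 2 \
       server1:/export/brick1 server2:/export/brick1 \
       server3:/export/brick1 server4:/export/brick1
   gluster volume start bigvol

   # regarding Q3: the native FUSE client only needs a server for the initial
   # volfile fetch and then talks to all bricks itself, so a heartbeat-managed
   # virtual IP is mainly an NFS concern; for FUSE a backup volfile server can
   # be named at mount time
   mount -t glusterfs -o backupvolfile-server=server2 server1:/bigvol /mnt/bigvol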
2013 Sep 22
2
Problem with glusterfs-server 3.4
...for the first time and have the following problem:
I want to have two nodes.
On node1 I have a RAID-1 system running in /raid/storage
Both nodes see each other, and now I try to create a volume.
When I try to create the first volume on the fresh system (node1) for the first time, gluster says:
volume create: glustervolume: failed: /raid/storage/ or a prefix of it is already part of a volume
How can that be? Is it possible that the mdadm RAID-1 system is the reason?
Regards,
Tito
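That "or a prefix of it is already part of a volume" message comes from glusterd finding its own extended attributes on the path (or on one of its parents), typically left over from an earlier create attempt; the mdadm RAID-1 underneath is not what the check looks at. A commonly cited cleanup, assuming /raid/storage holds nothing whose gluster metadata is still needed, is roughly:

   # see which gluster xattrs are present on the intended brick path
   getfattr -m . -d -e hex /raid/storage

   # remove leftover volume metadata ("No such attribute" errors here are harmless)
   setfattr -x trusted.glusterfs.volume-id /raid/storage
   setfattr -x trusted.gfid /raid/storage
   rm -rf /raid/storage/.glusterfs
   service glusterd restart

Using a subdirectory such as /raid/storage/brick1 as the brick (rather than the mount point itself) also avoids tripping the check and is generally the safer layout.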
2018 Feb 14
1
Different volumes on different interfaces
Hi,
I run a Proxmox system with a glustervolume over three nodes.
I am thinking about setting up a second volume, but want to use the other interfaces on
the nodes.
Is this recommended or possible?
Bye
Gregor
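One possible approach (a sketch, not from this thread): brick traffic goes to whatever hostname or IP is written into the brick definition, so a second volume can be created against names that resolve to the other interfaces. The node*-b names and paths below are invented for illustration:

   # /etc/hosts on all nodes and clients: extra names for the second NICs
   #   10.0.1.1  node1-b
   #   10.0.1.2  node2-b
   #   10.0.1.3  node3-b

   # depending on the gluster version, the extra names may first have to be
   # peer-probed so glusterd associates them with the existing peers
   gluster peer probe node1-b

   gluster volume create vol2 replica 3 \
       node1-b:/data/vol2/brick node2-b:/data/vol2/brick node3-b:/data/vol2/brick
   gluster volume start vol2

Management traffic between the glusterd daemons still uses the addresses the peers were originally probed with; only the brick and client data path moves to the second interfaces.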
2019 Aug 23
2
plenty of vacuuming processes
..._reduced_name)
  check_reduced_name: check_reduced_name [hildner/.config/menus] [/data/ho]
[2019/08/23 11:29:56.944070, 10, pid=1246, effective(101776, 513),
real(101776, 0), class=vfs] ../source3/smbd/vfs.c:1260(check_reduced_name)
  check_reduced_name realpath [<user>/.config/menus] ->
[/glustervolume/<user>/.config/menus]
[2019/08/23 11:29:56.944092,  5, pid=1246, effective(101776, 513),
real(101776, 0), class=vfs] ../source3/smbd/vfs.c:1371(check_reduced_name)
  check_reduced_name: <user>/.config/menus reduced to
/glustervolume/<user>/.config/menus
[2019/08/23 11:29:56.94411...
2013 May 12
0
Glusterfs with Infiniband tips
...unning NFS directly from the zfs pool gives me far better performance.
4. Tried various performance-related options for glusterfs, but with only a small performance increase ((
5. Clients perform horribly when adding new bricks to the cluster. By that I mean over 2 hours to run "time ls -lhR /glustervolume", which contains just 10 files. Basically, the mounted fs is completely unusable during this time!
6. Virtual machines with volumes stored on the glusterfs-mounted filesystem have extremely slow performance. I've not managed to get speeds over 50MB/s using the cache=none option.
If any of y...
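Point 5 (the two-hour "ls -lhR" after adding bricks) is the phase where the layout and data are redistributed; the commands below are only a sketch of the standard add-brick workflow, with "glustervolume" reused from the mount path as an assumed volume name, not a claimed fix for the reported slowness:

   # extend the volume, then spread the directory layout / data onto the new brick
   gluster volume add-brick glustervolume newserver:/export/brick1
   gluster volume rebalance glustervolume fix-layout start
   gluster volume rebalance glustervolume status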
2018 Feb 14
0
Different volumes on different interfaces
Hi,
I run a Proxmox system with a glustervolume over three nodes.
I am thinking about setting up a second volume, but want to use the other interfaces on
the nodes.
Is this recommended or possible?
Bye
Gregor
2019 Aug 23
2
plenty of vacuuming processes
Hi,
I have a ctdb cluster with 3 nodes and 3 glusterfs (version 6) nodes up
and running.
I observe plenty of these situations:
A connected Windows 10 client doesn't react anymore. I use folder
redirections.
- smbstatus shows some (auth in progress) processes.
- In the logs of a ctdb node I get:
Aug 23 10:12:29 ctdb-1 ctdbd[2167]: Ending traverse on DB locking.tdb
(id 568831), records
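For what it's worth, symptoms like these (clients stuck in "auth in progress", ctdb traversing locking.tdb) are usually narrowed down with the standard ctdb/samba status commands; nothing below comes from the thread, it is just the generic first-pass checklist:

   ctdb status          # node health, banned/unhealthy nodes
   ctdb getdbmap        # which TDBs are clustered, including locking.tdb
   smbstatus -p         # client processes, including those in (auth in progress)
   smbstatus -L         # current locks

The check_reduced_name lines in the vfs log excerpt from this thread correspond to a high samba debug level, e.g. "log level = 10" (or the "vfs:10" class specifier for just the VFS layer) in smb.conf.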
2012 Feb 05
2
Would a difference in size (and content) of a file on replicated bricks be healed?
...arate disk)
Volume Name: d1
Type: Replicate
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: glusterdev:/b1
Brick2: glusterdev:/b2
Brick3: glusterdev:/b3
Brick4: glusterdev:/b4
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
diagnostics.client-log-level: DEBUG
* My glustervolume is mounted as glusterfs on /d1
glusterdev:/d1 52403200 33024 52370176 1% /d1
1) I put a file on the glusterfs
date >/d1/data.txt
2) Checking the storage on the bricks: ls -l /b1 /b2 /b3 /b4
/b1:
total 8
-rw-r--r-- 1 root root 29 Feb 5 21:40 data.txt
/b2:
total 8
-rw-r--r--...
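Whether such a difference gets healed can be checked from the client side (reading the file through the mount triggers self-heal on that file in a replicate volume) or, on glusterfs 3.3 and later, via the heal commands; the lines below are a generic sketch using this post's volume name d1, not the poster's own steps:

   # glusterfs >= 3.3: trigger / inspect self-heal
   gluster volume heal d1
   gluster volume heal d1 info

   # accessing the file through the mount also triggers self-heal for that file
   md5sum /d1/data.txt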
2017 Jun 01
0
Gluster client mount fails in mid flight with signum 15
...he time of the failures
We've searched the interweb but cannot find anyone else having the same problem in mid flight
The clients have four mounts of volumes from the same server, all mounts fail simultaneously
Peer status looks ok
Volume status looks ok
Volume info looks like this:
Volume Name: GLUSTERVOLUME
Type: Replicate
Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/brick
Options Reconfigured:
transport.address-...
2017 Jun 01
2
Gluster client mount fails in mid flight with signum 15
...he time of the failures
We've searched the interweb but cannot find anyone else having the same problem in mid flight
The clients have four mounts of volumes from the same server, all mounts fail simultaneously
Peer status looks ok
Volume status looks ok
Volume info looks like this:
Volume Name: GLUSTERVOLUME
Type: Replicate
Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/brick
Options Reconfigured:
transport.address-...
2017 Jun 01
1
Gluster client mount fails in mid flight with signum 15
...searched the interweb but cannot find anyone else having the same problem in mid flight
>
> The clients have four mounts of volumes from the same server, all mounts fail simultaneously
> Peer status looks ok
> Volume status looks ok
> Volume info looks like this:
> Volume Name: GLUSTERVOLUME
> Type: Replicate
> Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
> Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/b...