Displaying 20 results from an estimated 3000 matches similar to: "Is there a Regression Test cycle exist?"
2009 Feb 03
1
A question about the DHT translator
With the DHT translator, will all files under the same directory be stored on the
same node?
That sounds terrible.
Thanks
2009 Jun 24
2
Limit of Glusterfs help
Hi:
Is there a limit on the number of servers that can be used for storage in Gluster?
2009-06-24
eagleeyes
From: gluster-users-request
Date: 2009-06-24 03:00:42
To: gluster-users
Cc:
Subject: Gluster-users Digest, Vol 14, Issue 34
Send Gluster-users mailing list submissions to
gluster-users at gluster.org
To subscribe or unsubscribe via the World Wide Web, visit
2012 Jan 27
0
Current tech overview?
Is there a technical overview of how GlusterFS distributes files across
nodes, which is up-to-date for 3.2?
I have found quite a lot of older documentation, from which I see:
* "Unify"
- it needs a shared namespace brick?
- there is a choice of schedulers
* "DHT"
- can't find out much about this; no matches for "dht" in the wiki
- presumably uses a
2013 Mar 05
1
memory leak in 3.3.1 rebalance?
I started rebalancing my 25x2 distributed-replicate volume two days ago.
Since then, the memory usage of the rebalance processes has been
steadily climbing by 1-2 megabytes per minute. Following
http://gluster.org/community/documentation/index.php/High_Memory_Usage,
I tried "echo 2 > /proc/sys/vm/drop_caches". This had no effect on the
processes' memory usage. Some of the
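One way to verify the steady climb described above is to sample the rebalance process's resident set size directly from /proc. This is a minimal sketch; the pgrep pattern is an assumption and may need adjusting to match your rebalance process's actual command line:

```shell
# Sample a process's VmRSS (resident set size, in kB) at intervals.
# The pgrep pattern below is an assumed placeholder.
pid=$(pgrep -f 'glusterfs.*rebalance' | head -1)
pid=${pid:-$$}                  # fall back to this shell for a dry run
for i in 1 2 3; do
    rss=$(awk '/^VmRSS:/ {print $2}' "/proc/$pid/status")
    echo "$(date +%T) pid=$pid VmRSS=${rss} kB"
    sleep 1                     # use 'sleep 60' for per-minute sampling
done
```

Comparing successive samples shows whether the growth matches the 1-2 MB/minute rate reported here.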
2017 Oct 02
1
nfs-ganesha locking problems
Hi Soumya,
what I can say so far:
It is working on a standalone system but not on the clustered system.
From reading the ganesha wiki I have the impression that it is
possible to change the log level without restarting ganesha. I was
playing with dbus-send but so far was unsuccessful. If you can help me
with that, this would be great.
Here are some details about the tested machines. The NFS client
2011 Apr 07
0
TCP connection increase on reconfigure, and a question about multi-graph
Hi, all.
I set up a DHT system and sent a HUP signal to the client to trigger reconfiguration.
But I found that the number of established TCP connections increased by the number
of bricks (i.e. the number of glusterfsd processes).
$ ps -ef | grep glusterfs
root 8579 1 0 11:28 ? 00:00:00 glusterfsd -f /home/huz/dht/server.vol -l /home/huz/dht/server.log -L TRACE
root 8583 1 0 11:28 ?
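A sketch for reproducing the observation above: count the established TCP connections owned by the client process before and after sending HUP. The `conn_count` helper and the pgrep pattern are illustrative placeholders, not commands from the thread:

```shell
# Count established TCP connections belonging to a given PID.
# On Linux, ss -tnp prints "pid=<pid>," in the process column.
conn_count() { ss -tnp 2>/dev/null | grep -c "pid=$1,"; }

client_pid=$(pgrep -f 'glusterfs ' | head -1)   # placeholder pattern
if [ -n "$client_pid" ]; then
    before=$(conn_count "$client_pid")
    kill -HUP "$client_pid"                      # trigger reconfiguration
    sleep 2
    after=$(conn_count "$client_pid")
    echo "established connections: before=$before after=$after"
fi
```

If the count grows by the brick count on every HUP, the old graph's connections are apparently not being torn down after the new graph is built.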
2009 Jun 26
0
Error when expanding DHT volumes
Hi all:
I met a problem when expanding DHT volumes. I wrote into a DHT storage directory until it grew to 90% full, so I added four new volumes to the configuration file.
But when starting again, some of the data in the directory had disappeared. Why? Is there a special action required before expanding the volumes?
My client configuration file is this:
volume client1
type protocol/client
option transport-type
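The volfile excerpt above is cut off. For context, a complete protocol/client block in that era's volfile syntax typically looks like the following; the host address and subvolume name here are placeholders, not the poster's actual values:

```
volume client1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.1        # placeholder server address
  option remote-subvolume brick1        # placeholder remote brick name
end-volume
```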
2009 May 11
1
Problem of afr in glusterfs 2.0.0rc1
Hello:
I have met this problem twice when copying files into the GFS space.
I have five clients and two servers. When I copy files into /data (the GFS space) on client A, the problem appears:
in the same path, machine A can see all the files, but B, C, and D cannot see them all; some files seem to be missing. But when I mount again, the files appear.
2018 Apr 30
0
Gluster rebalance taking many years
I have hit a big problem: the cluster rebalance takes a very long time after adding a
new node.
gluster volume rebalance web status

    Node       Rebalanced-files    size    scanned    failures    skipped    status    run time in h:m:s
    ---------  ----------------    ----    -------    --------    -------    ------    -----------------
2017 Aug 17
0
URGENT: Update issues from 3.6.6 to 3.10.2 Accessing files via samba come up with permission denied
Trying to revive this old thread as problems continue. I have noticed
from the gluster logs the following on my volume called export:
[2017-08-16 20:08:47.663908] I [MSGID: 109066]
[dht-rename.c:1608:dht_rename] 0-export-dht: renaming
/projects/ACTIVE/Automotive/JEEP/Brand Image Program June
2016/04_Western Region/Huntington Beach CDJR - Huntington Beach, CA/04
REVIT AND CAD/2017-08-16 CAD dwgs
2018 Apr 30
0
Gluster rebalance taking many years
Hi,
This value is an ongoing rough estimate based on the amount of data
rebalance has migrated since it started. The values will change as the
rebalance progresses.
A few questions:
1. How many files/dirs do you have on this volume?
2. What is the average size of the files?
3. What is the total size of the data on the volume?
Can you send us the rebalance log?
Thanks,
Nithya
On 30
2018 Apr 30
1
Gluster rebalance taking many years
I cannot count the number of files directly.
Through df -i I got an approximate file count of 63694442.
[root at CentOS-73-64-minimal ~]# df -i
Filesystem    Inodes       IUsed       IFree        IUse%    Mounted on
/dev/md2      131981312    30901030    101080282    24%      /
devtmpfs      8192893      435         8192458      1%       /dev
tmpfs
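The rough estimate above can be reproduced by summing the IUsed column of `df -i` across the relevant filesystems. The mount point in this sketch is a placeholder; on the poster's system it would be the brick mounts:

```shell
# Rough file-count estimate: sum the IUsed column (column 3) of df -i.
# "/" is a placeholder mount point -- substitute your brick mounts.
df -i / 2>/dev/null | awk 'NR > 1 { used += $3 } END { print used + 0 }'
```

Note this counts inodes, so directories and internal metadata files inflate the figure somewhat relative to the true file count.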
2018 Apr 30
2
Gluster rebalance taking many years
2013 Feb 26
0
Replicated Volume Crashed
Hi,
I have a gluster volume that consists of 22 bricks and includes a single
folder with 3.6 million files. Yesterday the volume crashed and turned out
to be completely unresponsive, and I was forced to perform a hard reboot on
all gluster servers: they were so heavily overloaded that they could not
execute a reboot command issued from the shell. Each gluster
server has 12 CPU cores
2018 Apr 23
0
Gluster + NFS-Ganesha Failover
Hello All,
I am trying to setup a three way replicated Gluster Storage which is
exported by NFS Ganesha.
This 3 node Ganesha cluster is managed by pacemaker and corosync. I want
to use this cluster as a backend for several different web-based
applications as well as storage for mailboxes.
The cluster is working well but after triggering the failover by
stopping the ganesha service on one node,
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
Hi,
Sorry I didn't confirm the results sooner.
Yes, it's working fine without issues for me.
It would help if anyone else can confirm too, so we can be sure it's 100% resolved.
--
Respectfully
Mahdi A. Mahdi
________________________________
From: Krutika Dhananjay <kdhananj at redhat.com>
Sent: Tuesday, June 6, 2017 9:17:40 AM
To: Mahdi Adnan
Cc: gluster-user; Gandalf Corvotempesta; Lindsay
2013 Oct 14
1
Error while running make on Linux machine - unable to install glusterfs
I am trying to install glusterfs on a Linux machine.
glusterfs version 3.3.2
uname -orm
GNU/Linux 2.6.22-6.4.3-amd64-2527508 x86_64

Ran ./configure --prefix=mytempdir

No errors reported here.
When I ran make, I got this error. Any help appreciated. I am a newbie.
Thanks,
CR

make --no-print-directory --quiet all-recursive
Making all in argp-standalone
Making all in .
Making all
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
Any additional tests would be great, as a similar bug was detected and
fixed some months ago, and after that this bug arose.
It is still unclear to me why two very similar bugs were discovered at two
different times for the same operation.
How is this possible?
If you fixed the first bug, why wasn't the second one triggered in your
test environment?
On 6 Jun 2017 10:35 AM, "Mahdi
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
-Krutika
On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> Great news.
> Is this planned to be published in the next release?
>
> On 29 May 2017 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com>
> wrote:
>
>> Thanks for that update.
2018 May 08
1
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/11/2018 11:54 AM, Alex K wrote:
Hey guys,
Returning to this topic, after disabling the quorum:
cluster.quorum-type: none
cluster.server-quorum-type: none
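For reference, options like the two above are toggled per volume with `gluster volume set`; the volume name gv01 is taken from this thread's subject line:

```
# disable quorum enforcement on the volume, as described above
gluster volume set gv01 cluster.quorum-type none
gluster volume set gv01 cluster.server-quorum-type none

# restore the defaults later
gluster volume reset gv01 cluster.quorum-type
gluster volume reset gv01 cluster.server-quorum-type
```

Note that with only two nodes, disabling quorum trades the "Quorum not met" startup failure for a real split-brain risk.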
I've run into a number of gluster errors (see below).
I'm using gluster as the backend for my NFS storage. I have gluster
running on two nodes, nfs01 and nfs02. It's mounted on /n on each host.
The path /n is