Displaying 20 results from an estimated 2000 matches similar to: "Replicated volume read request are served by remote brick"
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello,
We have a very fresh gluster 3.10.10 installation.
Our volume is created as a distributed volume, 9 bricks, 96TB in total
(87TB after the 10% gluster disk space reservation).
For some reason I can't 'heal' the volume:
# gluster volume heal gv0
Launching heal operation to perform index self heal on volume gv0 has
been unsuccessful on bricks that are down. Please check if all brick
processes
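
A quick way to act on that error message is to check whether every brick process is actually up, using the volume name gv0 from the post above:

# gluster volume status gv0

Every brick should show Y in the Online column and have a PID; any brick listed as N is the one the heal command is complaining about.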
2017 Jul 18
1
Sporadic Bus error on mmap() on FUSE mount
On 18.7.2017 12:17, Niels de Vos wrote:
> On Tue, Jul 18, 2017 at 10:48:45AM +0200, Jan Wrona wrote:
>> Hi,
>>
>> I need to use rrdtool on top of a Gluster FUSE mount; rrdtool uses
>> memory-mapped file IO extensively (I know I can recompile rrdtool with
>> mmap() disabled, but that is just a workaround). I have three FUSE mount
>> points on three different
2009 May 19
1
nufa and missing files
We're using gluster-2.0.1 and a nufa volume comprised of thirteen
subvolumes across thirteen hosts.
We've found today that there are some files in the local filesystem
associated with the subvolume from one of the hosts that are not
being seen in the nufa volume on any gluster client.
I don't know how or when this happened, but now we have to do some
work to get this gluster volume
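
One hedged way to narrow this down is to compare a directory listing taken directly on the affected host's backend export with the same directory seen through a client mount (both paths below are placeholders, not from the original post):

# ls -la /data/export/some/dir
# ls -la /mnt/gluster/some/dir

Files that appear in the first listing but not in the second are the ones the clients are not seeing.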
2018 Mar 13
4
Can't heal a volume: "Please check if all brick processes are running."
Hi Anatoliy,
The heal command is basically used to heal any mismatching contents between
replica copies of the files.
For the command "gluster volume heal <volname>" to succeed, you should have
the self-heal-daemon running,
which is true only if your volume is of type replicate/disperse.
In your case you have a plain distribute volume where you do not store the
replica of any
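
A minimal sketch of that check, assuming the volume name gv0 used earlier in the thread:

# gluster volume info gv0 | grep Type
# gluster volume status gv0

For a plain distribute volume the first command prints "Type: Distribute" and the status output contains no Self-heal Daemon entries, so there is no daemon for the heal command to talk to.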
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Hi,
Maybe someone can point me to documentation or explain this? I can't
find it myself.
Do we have any other useful resources except doc.gluster.org? As far as I can
see, many gluster options are not described there, or there is no explanation
of what they do...
On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
> Hello,
>
> We have a very fresh gluster 3.10.10 installation.
> Our volume
2018 Mar 14
2
Can't heal a volume: "Please check if all brick processes are running."
On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev <tolid at tolid.eu.org>
wrote:
> Hi Karthik,
>
>
> Thanks a lot for the explanation.
>
> Does it mean that the health of a distributed volume can be checked only by
> the "gluster volume status" command?
>
Yes. I am not aware of any other command which can give the status of a plain
distribute volume which is similar to
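
For reference, a hedged sketch of that check (again assuming the volume name gv0 from earlier in the thread); the detail variant also reports per-brick disk usage and inode counts:

# gluster volume status gv0
# gluster volume status gv0 detail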
2012 Nov 14
3
Using local writes with gluster for temporary storage
Hi,
We have a cluster of 130 compute nodes with NAS-type
central storage under gluster (3 bricks, ~50TB). When we
run a large number of ocean models we can run into bottlenecks
with many jobs trying to write to our central storage.
It was suggested to us that we could also use gluster to
unite the disks on the compute nodes into a single "disk"
in which files would be written
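
A minimal sketch of that idea, with hypothetical hostnames and brick paths that are not from the original post: a plain distribute volume built from the compute nodes' local disks and used purely as scratch space.

# gluster volume create scratch node001:/local/brick node002:/local/brick node003:/local/brick
# gluster volume start scratch
# mount -t glusterfs node001:/scratch /mnt/scratch

With no replica count given, gluster creates a distributed volume, so losing a node also loses whatever scratch files lived on that node's brick.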
2017 Jul 18
2
Sporadic Bus error on mmap() on FUSE mount
Hi,
I need to use rrdtool on top of a Gluster FUSE mount; rrdtool uses
memory-mapped file IO extensively (I know I can recompile rrdtool with
mmap() disabled, but that is just a workaround). I have three FUSE mount
points on three different servers, on one of them the command "rrdtool
create test.rrd --start 920804400 DS:speed:COUNTER:600:U:U
RRA:AVERAGE:0.5:1:24" works fine, on
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Can we add a smarter error message for this situation by checking the volume
type first?
Cheers,
Laura B
On Wednesday, March 14, 2018, Karthik Subrahmanya <ksubrahm at redhat.com>
wrote:
> Hi Anatoliy,
>
> The heal command is basically used to heal any mismatching contents
> between replica copies of the files.
> For the command "gluster volume heal <volname>"
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
Hi Karthik,
Thanks a lot for the explanation.
Does it mean that the health of a distributed volume can be checked only by
the "gluster volume status" command?
And one more question: cluster.min-free-disk is 10% by default. What
kind of "side effects" could we face if this option is reduced to,
for example, 5%? Could you point to any best practice document(s)?
Regards,
Anatoliy
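
For what it is worth, a hedged sketch of inspecting and changing that option (gv0 being the volume name from earlier in the thread); the value can be given as a percentage:

# gluster volume get gv0 cluster.min-free-disk
# gluster volume set gv0 cluster.min-free-disk 5%

cluster.min-free-disk only influences where DHT places new files: once a brick crosses the threshold, new files are created on other bricks with a pointer left on the hashed brick, so lowering it mainly packs more data onto nearly full bricks at the cost of headroom.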
2017 Jul 18
0
Sporadic Bus error on mmap() on FUSE mount
On Tue, Jul 18, 2017 at 10:48:45AM +0200, Jan Wrona wrote:
> Hi,
>
> I need to use rrdtool on top of a Gluster FUSE mount; rrdtool uses
> memory-mapped file IO extensively (I know I can recompile rrdtool with
> mmap() disabled, but that is just a workaround). I have three FUSE mount
> points on three different servers, on one of them the command "rrdtool
> create
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
On Wed, Mar 14, 2018 at 5:42 PM, Karthik Subrahmanya <ksubrahm at redhat.com>
wrote:
>
>
> On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev <tolid at tolid.eu.org>
> wrote:
>
>> Hi Karthik,
>>
>>
>> Thanks a lot for the explanation.
>>
>> Does it mean a distributed volume health can be checked only by "gluster
>> volume
2008 Oct 17
6
GlusterFS compared to KosmosFS (now called cloudstore)?
Hi.
I'm evaluating GlusterFS for our DFS implementation and wondered how it
compares to KFS/CloudStore.
These features look especially nice
(http://kosmosfs.sourceforge.net/features.html). Any idea which of them exist
in GlusterFS as well?
Regards.
2008 Oct 15
1
Glusterfs performance with large directories
We at Wiseguys are looking into GlusterFS to run our Internet Archive.
The archive stores webpages collected by our spiders.
The test setup consists of three data machines, each exporting a volume
of about 3.7TB, and one nameserver machine.
The file layout is such that each host has its own directory; for example, the
GlusterFS website would be located in:
2019 Jun 08
2
Does CentOS support aspell?
I haven't run CentOS on a machine of my own for several years,
but my domain (NOT the address I post from) is hosted on a machine
running CentOS. The list for the mailer I run recommends using aspell
as a spellchecker, but it is not installed (according to rpm -q) on the
remote host.
Does anybody here know offhand if CentOS supports it? Or how do I
check?
--
Beartooth Staffwright, Not
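
A hedged way to answer that from a shell on the host itself:

# yum info aspell

If the configured repositories carry the package, yum prints its version and repository; aspell has long been available in the CentOS base repository, so a plain "yum install aspell" (run as root) should be all that is needed.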
2017 Sep 03
3
Poor performance with shard
Hey everyone!
I have deployed gluster on 3 nodes with 4 SSDs each and a 10Gb Ethernet
connection.
The storage is configured with 3 gluster volumes; every volume has 12
bricks (4 bricks on every server, 1 per SSD in the server).
With the 'features.shard' option off, my write speed (using the 'dd'
command) is approximately 250 Mbs, and when the feature is on the write
speed is
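
For context, a hedged sketch of the kind of test being described; the mount point, file name and volume name below are placeholders, and oflag=direct asks for O_DIRECT so local caching does not inflate the number:

# dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1M count=1024 oflag=direct
# gluster volume get <volname> features.shard
# gluster volume get <volname> features.shard-block-size

The shard block size (4MB by default) matters here: very small shards turn one large sequential write into many separate shard files and correspondingly more metadata work.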
2018 Jan 30
2
Tiered volume performance degrades badly after a volume stop/start or system restart.
I am fighting this issue:
Bug 1540376 - Tiered volume performance degrades badly after a
volume stop/start or system restart.
https://bugzilla.redhat.com/show_bug.cgi?id=1540376
Does anyone have any ideas on what might be causing this, and
what a fix or work-around might be?
Thanks!
~ Jeff Byers ~
Tiered volume performance degrades badly after a volume
stop/start or system restart.
The
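
A hedged note: on gluster builds that still ship the tier feature, the tier activity can be watched while reproducing the slowdown, which helps separate a tiering problem from a general volume problem (the volume name is a placeholder):

# gluster volume tier <volname> status

The output should report how many files were promoted and demoted per node, so a tier daemon that has stalled or is thrashing tends to show up there.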
2017 Nov 08
2
glusterfs brick server use too high memory
Hi all,
I'm glad to join the glusterfs community.
I have a glusterfs cluster:
Nodes: 4
System: Centos7.1
Glusterfs: 3.8.9
Each Node:
CPU: 48 core
Mem: 128GB
Disk: 1*4T
There is one Distributed Replicated volume. There are ~160 k8s pods as clients connecting to glusterfs. But the memory usage of the glusterfsd process is too high, gradually increasing to 100G on every node.
Then, I reboot the glusterfsd
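
A hedged first step for this kind of memory growth is to take statedumps of the brick processes and compare the memory accounting sections over time (the volume name is a placeholder):

# gluster volume statedump <volname>

The dumps land under /var/run/gluster/ on each brick node by default; taking one now and another a few hours later shows which translator's allocations keep growing.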
2017 Nov 09
0
glusterfs brick server use too high memory
On 8 November 2017 at 17:16, Yao Guotao <yaoguo_tao at 163.com> wrote:
> Hi all,
> I'm glad to join the glusterfs community.
>
> I have a glusterfs cluster:
> Nodes: 4
> System: Centos7.1
> Glusterfs: 3.8.9
> Each Node:
> CPU: 48 core
> Mem: 128GB
> Disk: 1*4T
>
> There is one Distributed Replicated volume. There are ~160 k8s pods as
> clients
2018 Jan 31
1
Tiered volume performance degrades badly after a volume stop/start or system restart.
Tested it in two different environments lately with exactly the same results.
Was trying to get better read performance from local mounts with
hundreds of thousands of maildir email files by using SSDs,
hoping that .gluster file stat reads would improve as they migrate
to the hot tier.
After seeing what you described for 24 hours and confirming all movement
between the tiers was done - killed it.
Here are my