2017 Nov 16
0
Missing files on one of the bricks
On 15 November 2017 at 19:57, Frederic Harmignies <
frederic.harmignies at elementai.com> wrote:
> Hello, we have 2x files that are missing from one of the bricks. No idea
> how to fix this.
>
> Details:
>
> # gluster volume info
>
> Volume Name: data01
> Type: Replicate
> Volume ID: 39b4479c-31f0-4696-9435-5454e4f8d310
> Status: Started
> Sn...
2017 Nov 15
2
Missing files on one of the bricks
...lready tried:
#gluster heal volume full
Running a stat and ls -l on both files from a mounted client to try and
trigger a heal
Would a re-balance fix this? Any guidance would be greatly appreciated!
Thank you in advance!
--
*Frederic Harmignies*
*High Performance Computer Administrator*
www.elementai.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20171115/e0e6598d/attachment.html>
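Editorial note on the thread above: the heal subcommand's argument order is `gluster volume heal <VOLNAME> ...`, and a rebalance would not help here (rebalance moves files between distribute subvolumes; recreating missing replica copies is the self-heal daemon's job). A sketch of the usual sequence, assuming the volume name data01 from the quoted output:

```shell
# List entries still pending heal (run on any server in the pool)
gluster volume heal data01 info

# Trigger an index heal of the pending entries
gluster volume heal data01

# Fall back to a full crawl if the index heal misses the files
gluster volume heal data01 full

# Verify nothing is left pending or in split-brain
gluster volume heal data01 info split-brain
```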
2017 Nov 16
2
Missing files on one of the bricks
On 11/16/2017 04:12 PM, Nithya Balachandran wrote:
>
>
> On 15 November 2017 at 19:57, Frederic Harmignies
> <frederic.harmignies at elementai.com
> <mailto:frederic.harmignies at elementai.com>> wrote:
>
> Hello, we have 2x files that are missing from one of the bricks.
> No idea how to fix this.
>
> Details:
>
> # gluster volume info
> Volume Name: data01
> Type: Replicate...
2017 Nov 16
0
Missing files on one of the bricks
...-b0ba-2dbe4c803c54.
sources=[1] sinks=0
On Thu, Nov 16, 2017 at 7:14 AM, Ravishankar N <ravishankar at redhat.com>
wrote:
>
>
> On 11/16/2017 04:12 PM, Nithya Balachandran wrote:
>
>
>
> On 15 November 2017 at 19:57, Frederic Harmignies <frederic.harmignies@
> elementai.com> wrote:
>
>> Hello, we have 2x files that are missing from one of the bricks. No idea
>> how to fix this.
>>
>> Details:
>>
>> # gluster volume info
>>
>> Volume Name: data01
>> Type: Replicate
>> Volume ID: 39b4479c-31f0-4696-94...
2017 Jun 15
0
Interesting split-brain...
...resolve it by using the *favorite-child-policy*.
Since the file is not important, you can go with deleting that.
[1]
https://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/#fixing-directory-entry-split-brain
HTH,
Karthik
On Thu, Jun 15, 2017 at 8:23 AM, Ludwig Gamache <ludwig at elementai.com>
wrote:
> I am new to gluster but already like it. I did a maintenance last week
> where I shut down both nodes (one after the other). I had many files that
> needed to be healed after that. Everything worked well, except for 1 file.
> It is in split-brain, with 2 different GFI...
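The favorite-child-policy approach suggested in the reply above can be sketched as follows (volume name data01 assumed; the listed values are the ones the option accepts in 3.10-era releases). Note that in 3.10 this policy resolves data/metadata split-brains; a GFID split-brain like the one in this thread still needs the manual deletion described in the linked doc.

```shell
# Let AFR pick a winner automatically, e.g. the copy with the
# most recent modification time
gluster volume set data01 cluster.favorite-child-policy mtime

# Other accepted values: size, ctime, majority.
# Disable again once the backlog is healed:
gluster volume set data01 cluster.favorite-child-policy none
```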
2017 Jun 22
0
remounting volumes, is there an easier way
On Tue, Jun 20, 2017 at 7:32 PM, Ludwig Gamache <ludwig at elementai.com>
wrote:
> All,
>
> Over the week-end, one of my volumes became unavailable. All clients could
> not access their mount points. On some of the clients, I had user processes
> that were using these mount points. So, I could not do a umount/mount without
> killing these processe...
2017 Jun 15
1
Interesting split-brain...
...licy*.
> Since the file is not important, you can go with deleting that.
>
> [1]
> https://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/#fixing-directory-entry-split-brain
>
> HTH,
> Karthik
>
> On Thu, Jun 15, 2017 at 8:23 AM, Ludwig Gamache <ludwig at elementai.com
> <mailto:ludwig at elementai.com>> wrote:
>
> I am new to gluster but already like it. I did a maintenance last
> week where I shut down both nodes (one after the other). I had
> many files that needed to be healed after that. Everything worked
> wel...
2017 Jun 20
2
remounting volumes, is there an easier way
All,
Over the week-end, one of my volumes became unavailable. All clients could
not access their mount points. On some of the clients, I had user processes
that were using these mount points. So, I could not do a umount/mount without
killing these processes.
I also noticed that when I restarted the volume, the port changed on the
server. So, clients that were still using the previous TCP/port could
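A sketch of the usual workaround when processes pin a dead mount (mount point and server name are placeholders): a lazy unmount detaches the path immediately, and a fresh FUSE mount learns the new brick port from glusterd, which itself stays on port 24007.

```shell
# Detach now; the kernel finishes cleanup when the last
# process closes its open file handles
umount -l /mnt/data01

# Remount; the client asks glusterd for current brick ports,
# so a brick port that changed on volume restart is picked up
mount -t glusterfs serverA:/data01 /mnt/data01
```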
2017 Jun 15
2
Interesting split-brain...
I am new to gluster but already like it. I did a maintenance last week
where I shut down both nodes (one after the other). I had many files that
needed to be healed after that. Everything worked well, except for 1 file.
It is in split-brain, with 2 different GFIDs. I read the documentation but
it only covers the cases where the GFID is the same on both bricks. BTW, I
am running Gluster 3.10.
Here
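The different-GFIDs case above is the one the docs resolve manually: on the brick holding the copy you want to discard, delete both the file and its `.glusterfs` hard link, then re-trigger a heal. A sketch with placeholder brick path, file path, and GFID:

```shell
# BRICK, FILE and GFID are placeholders; read the real GFID with:
#   getfattr -n trusted.gfid -e hex <file-on-brick>
BRICK=/bricks/brick1
FILE=path/to/file                          # relative to the brick root
GFID=ab12cd34-ef56-7890-abcd-ef1234567890

# Every file has a hard link at .glusterfs/<aa>/<bb>/<gfid>,
# where aa and bb are the GFID's first two byte-pairs
LINK=".glusterfs/$(echo "$GFID" | cut -c1-2)/$(echo "$GFID" | cut -c3-4)/$GFID"
echo "$LINK"

# Remove the file and its gfid link on the bad brick, then
# re-trigger a heal by stat-ing the file from a client mount:
#   rm "$BRICK/$FILE" "$BRICK/$LINK"
#   stat /mnt/data01/path/to/file
```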
2017 Oct 24
0
trying to add a 3rd peer
Are you sure that all node names can be resolved on all the other nodes?
You need to use the names previously registered in Gluster - check them with 'gluster peer status' or 'gluster pool list'.
Regards,
Bartosz
> On 24.10.2017, at 03:13, Ludwig Gamache <ludwig at elementai.com> wrote:
>
> All,
>
> I am trying to add a third peer to my gluster install. The first 2 nodes have been running for many months and have gluster 3.10.3-1.
>
> I recently installed the 3rd node and gluster 3.10.6-1. I was able to start the gluster d...
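The name-resolution check suggested in the reply above can be scripted; the hostnames here are placeholders:

```shell
# Names the pool already knows:
gluster pool list

# From each node, confirm every peer name resolves:
for h in gluster01 gluster02 gluster03; do
  getent hosts "$h" >/dev/null || echo "cannot resolve $h"
done
```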
2017 Oct 24
2
trying to add a 3rd peer
All,
I am trying to add a third peer to my gluster install. The first 2 nodes
have been running for many months with gluster 3.10.3-1.
I recently installed the 3rd node with gluster 3.10.6-1. I was able to start
the gluster daemon on it. After that, I tried to add the peer from one of the 2
previous servers (gluster peer probe IPADDRESS).
That first peer started the communication with the 3rd peer. At
2017 Sep 25
1
Adding bricks to an existing installation.
Sharding is not enabled.
Ludwig
On Mon, Sep 25, 2017 at 2:34 PM, <lemonnierk at ulrar.net> wrote:
> Do you have sharding enabled ? If yes, don't do it.
> If no I'll let someone who knows better answer you :)
>
> On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote:
> > All,
> >
> > We currently have a Gluster installation which is made of 2
2010 Jul 06
0
"Mounts without paths are not usable"
Good day to you.
I've updated my puppetmaster and clients to 0.25.x and also updated
the file serving paths to include "/modules/". I also use Foreman to
monitor puppetd reports; after the update, these errors sporadically appear:
"Failed to retrieve current state of resource: Mounts without paths
are not usable Could not describe /modules/puppet-client/
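For the 0.25.x error quoted above, a likely cause is a fileserver.conf mount declared without a `path`; in particular, `[modules]` and `[plugins]` became internal mounts in 0.25 and should not be declared there at all. A hedged sketch of the distinction (mount name and path are examples):

```
# /etc/puppet/fileserver.conf

# Custom mounts require an explicit path:
[extra_files]
  path /srv/puppet/files
  allow *

# Do NOT declare [modules] or [plugins] here on 0.25.x --
# they are served internally, and a declaration without a
# path can trigger "Mounts without paths are not usable".
```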
2017 Jun 16
1
emptying the trash directory
All,
I just enabled the trashcan feature on our volumes. It is working as
expected. However, I can't seem to find the rules to empty the trashcan. Is
there any automated process to do that? If so, what are the configuration
features?
Regards,
Ludwig
--
Ludwig Gamache
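There is no built-in purge schedule for the trash translator; a common approach is a periodic job against the volume's `.trashcan` directory on a client mount. A sketch, with the mount point and 30-day retention as assumptions (`internal_op` is gluster's own subdirectory and should be left alone):

```shell
# Remove trashed files older than the given number of days,
# skipping gluster's internal_op directory
purge_trash() {
  find "$1" -path '*/internal_op' -prune -o \
       -type f -mtime "+$2" -exec rm -f {} +
}

# e.g. daily from cron against a client mount:
#   purge_trash /mnt/data01/.trashcan 30
```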
2017 Sep 25
2
Adding bricks to an existing installation.
All,
We currently have a Gluster installation which is made of 2 servers. Each
server has 10 drives on ZFS, and I have a Gluster mirror between the two.
The current config looks like:
SERVER A-BRICK 1 replicated to SERVER B-BRICK 1
I now need to add more space and a third server. Before I do the changes, I
want to know if this is a supported config. By adding a third server, I
simply want to
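For the layout above (one replica pair across servers A and B), bricks must be added in multiples of the replica count, so the two usual paths are sketched below; hostnames and brick paths are placeholders:

```shell
# Grow capacity: add another replica-2 pair, then rebalance
gluster volume add-brick data01 serverC:/bricks/b1 serverD:/bricks/b1
gluster volume rebalance data01 start

# Or keep the same data but add a third copy of it:
gluster volume add-brick data01 replica 3 serverC:/bricks/b1
```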
2017 Jun 20
2
trash can feature, crashed???
All,
I currently have 2 bricks running Gluster 3.10.1. This is a Centos
installation. On Friday last week, I enabled the trashcan feature on one of
my volumes:
gluster volume set date01 features.trash on
I also limited the max file size to 500MB:
gluster volume set data01 features.trash-max-filesize 500MB
3 hours after I enabled this, that specific gluster volume went down:
[2017-06-16
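When a volume drops right after an option change like the one above, the usual first steps are a status check, rolling back the suspect option, and reading the brick logs (sketch; volume name from the thread):

```shell
# Are the brick processes still running?
gluster volume status data01

# Roll the suspect option back while investigating
gluster volume reset data01 features.trash

# Brick logs usually record why a brick process exited
tail -n 100 /var/log/glusterfs/bricks/*.log
```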