2017 Nov 08
0
Adding a slack for communication?
It's a great idea! :)
But think about creating a Slack for all Red Hat-sponsored open source
projects - for example, one Slack workspace with separate Gluster, Ceph,
Fedora, etc. channels.
I can't wait for it!
Bartosz
On 08.11.2017 22:22, Amye Scavarda wrote:
> From today's community meeting, we had an item from the issue queue:
> https://github.com/gluster/community/issues/13
2017 Oct 10
0
ZFS with SSD ZIL vs XFS
Last time I read about tiering in Gluster, there wasn't any performance
gain with VM workloads, and moreover it doesn't speed up writes...
On 10 Oct 2017 9:27 PM, "Bartosz Zięba" <kontakt at avatat.pl> wrote:
> Hi,
>
> Have you thought about using an SSD as a GlusterFS hot tier?
>
> Regards,
> Bartosz
>
>
> On 10.10.2017 19:59, Gandalf Corvotempesta wrote:
>
>> 2017-10-10 18:27 GMT+02:00 Jeff Darcy <jeff at pl.atyp.us>:
>>
>>>...
2018 Jan 19
2
Backup Solutions for GlusterFS
Hi,
What are the backup solutions for GlusterFS? Does GlusterFS support any backup solutions?
Sincerely,Kadir
2018 Jan 19
0
Backup Solutions for GlusterFS
Probably any file-level backup tool will work. ;)
Rsync is the simplest option.
Regards,
Bartosz
> On 19 Jan 2018, at 08:27, Kadir <qkadir at yahoo.com> wrote:
>
> Hi,
>
> What are the backup solutions for GlusterFS? Does GlusterFS support any backup solutions?
>
> Sincerely,
> Kadir
> _______________________________________________
> Gluster-users mailing list
>
2017 Oct 27
0
Poor gluster performance on large files.
Why don't you set the LSI controller to passthrough mode and use one brick per HDD?
Regards,
Bartosz
> Message from Brandon Bates <brandon at brandonbates.com> written on 27.10.2017 at 08:47:
>
> Hi gluster users,
> I've spent several months trying to get any kind of high performance out of gluster. The current XFS/samba array is used for video editing and 300-400MB/s for
2017 Oct 24
0
trying to add a 3rd peer
Are you sure that all node names can be resolved on all the other nodes?
You need to use the names previously used in Gluster - check them with 'gluster peer status' or 'gluster pool list'.
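For reference, the checks suggested above look like this; the hostname in the probe is an assumption, and the commands must run against a live glusterd:

```shell
# On an existing node, list the peer names Gluster already knows:
gluster peer status
gluster pool list

# Probe the new node by a hostname that every node can resolve,
# rather than a bare IP address:
gluster peer probe node3.example.com
```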
Regards,
Bartosz
> Message from Ludwig Gamache <ludwig at elementai.com> written on 24.10.2017 at 03:13:
>
> All,
>
> I am trying to add a third peer to my gluster
2017 Oct 24
2
trying to add a 3rd peer
All,
I am trying to add a third peer to my gluster install. The first 2 nodes
have been running for many months on gluster 3.10.3-1.
I recently installed the 3rd node with gluster 3.10.6-1. I was able to start
the gluster daemon on it. Afterwards, I tried to add the peer from one of the 2
previous servers (gluster peer probe IPADDRESS).
That first peer started the communication with the 3rd peer. At
2017 Nov 08
9
Adding a slack for communication?
From today's community meeting, we had an item from the issue queue:
https://github.com/gluster/community/issues/13
Should we have a Gluster Community slack team? I'm interested in
everyone's thoughts on this.
- amye
--
Amye Scavarda | amye at redhat.com | Gluster Community Lead
2017 Oct 27
5
Poor gluster performance on large files.
Hi gluster users,
I've spent several months trying to get any kind of high performance out of
gluster. The current XFS/samba array is used for video editing and
300-400MB/s for at least 4 clients is minimum (currently a single windows
client gets at least 700/700 for a single client over samba, peaking to 950
at times using blackmagic speed test). Gluster has been getting me as low
as
2017 Jun 22
0
Disable write-back in tiering volume
Hi all!
I made a GlusterFS volume from a few HDDs across a few servers, and I wanted to add SSD-based tiering. Write-back caching is a problem in this case, because the SSDs are in only one server, so its failure would mean the failure of the whole hot tier and data loss.
I want to disable write-back caching and write from the client nodes directly to the HDD volume (the cold tier). How can I do that?
My cluster is based on GlusterFS
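One way to address the failure-domain concern above, rather than disabling write-back, is to make the hot tier itself replicated across two servers so a single SSD server failure does not lose the tier. A hedged sketch using the tier attach command (the volume name, hostnames, and brick paths are assumptions, and this requires SSDs in both servers):

```shell
# Attach a replicated SSD hot tier so no single server holds the only
# copy of hot-tier data. Placeholder volume/host/path names throughout.
gluster volume tier myvol attach replica 2 \
    server1:/ssd/brick server2:/ssd/brick
```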