Displaying 20 results from an estimated 1000 matches similar to: "Announcing Glustered 2018 in Bologna (IT)"
2018 Feb 28
1
Glustered 2018 schedule
Today we published the program for the "Glustered 2018" meeting (Bologna,
Italy, 2018-03-08).
Hope to see some of you here.
http://www.incontrodevops.it/events/glustered-2018/
Ivan
2018 Feb 01
0
Gluster Monthly Newsletter, January 2018
4.0 is coming!
We're currently tracking for a 4.0 release at the end of February, which
means our next edition will be all about 4.0!
This weekend, we have a busy schedule at FOSDEM with a Software Defined
Storage DevRoom on Sunday -
https://fosdem.org/2018/schedule/track/software_defined_storage/ with
Gluster-4.0 and GD2 - Learn what's in
2018 Jan 01
0
Gluster Monthly Newsletter, December 2017
A New Year's Eve edition of the monthly happenings in Gluster!
Gluster Summit Recordings
If you missed out on Gluster Summit, our recordings are available on our
YouTube channel in our Gluster Summit 2017 Playlist
https://www.youtube.com/playlist?list=PLUjCssFKEMhUSb2CFNvayyVYbTaagcaeE
Gluster Developer Conversations
Our next hangout is on Jan 16, 15:00 UTC!
Want to sign up?
2017 Nov 08
1
BLQ Gluster community meeting anyone?
Hello community,
My company is willing to host a Gluster-community meeting in Bologna
(Italy) on March 8th 2018, back-to-back with Incontro Devops Italia (
http://2018.incontrodevops.it) and in the same venue as the conference.
I think that having 2-3 good technical talks, plus some BOFs/lightning
talks/open-space discussions, will make for a nice half-day event. It is
also probable that one or
2018 Feb 28
0
Gluster Monthly Newsletter, February 2018
Special thanks to all of our contributors working to get Gluster 4.0 out
into the wild.
Over the coming weeks, we'll be posting on the blog about some of the new
improvements coming out in Gluster 4.0, so watch for that!
Glustered: A Gluster Community Gathering is happening on March 8, in
connection with Incontro DevOps 2018. More details here:
2012 Nov 25
1
Error : Error in if (antipodal(p1, p2))
Hey,
I'm trying to build something like this
http://flowingdata.com/2011/05/11/how-to-map-connections-with-great-circles/
but with my own data in csv files.
The code runs well if I use the same csv files as the author, but with mine,
this is what I get:
*Code*
library(maps)
library(geosphere)
map("world")
xlim <- c(-180.00, 180.00)
ylim <- c(-90.00, 90.00)
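This "Error in if (antipodal(p1, p2))" from gcIntermediate() often points to missing (NA) or out-of-range coordinates in the caller's own CSV: antipodal() then returns NA and the if () test fails. Below is a minimal, hedged sketch of filtering the data before plotting; the file name flights.csv and the column names lon1/lat1/lon2/lat2 are assumptions for illustration, not taken from the original post.

library(maps)
library(geosphere)

# Hypothetical input: one row per connection, with numeric lon/lat columns.
flights <- read.csv("flights.csv", stringsAsFactors = FALSE)

# Drop rows with missing or out-of-range coordinates; NA values here are
# what typically makes the antipodal() check inside gcIntermediate() fail.
ok <- complete.cases(flights[, c("lon1", "lat1", "lon2", "lat2")]) &
      abs(flights$lat1) <= 90 & abs(flights$lat2) <= 90 &
      abs(flights$lon1) <= 180 & abs(flights$lon2) <= 180
flights <- flights[ok, ]

map("world", col = "grey80", fill = TRUE, lwd = 0.05)
for (i in seq_len(nrow(flights))) {
  arc <- gcIntermediate(c(flights$lon1[i], flights$lat1[i]),
                        c(flights$lon2[i], flights$lat2[i]),
                        n = 100, addStartEnd = TRUE)
  lines(arc, col = "steelblue", lwd = 0.5)
}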
2013 Feb 09
1
R maps Help
I am fairly new to R and am plotting flight data on a map. Everything is
working well except that the map is really too small to show the data
effectively, and I can't seem to figure out how to make the output map
larger. Do I need to change the device characteristics, or is it a map.???
call? Here is the code:
library(maps)
library(geosphere)
airports <-
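With maps, the size of the rendered plot comes from the active graphics device rather than from the map() call itself, so the usual fix is to open a suitably large device before plotting. A minimal sketch under that assumption follows; the output file name and dimensions are arbitrary examples, not from the original post.

library(maps)

# Open a device with an explicit size in pixels; map() then fills it.
png("flights_map.png", width = 1600, height = 900, res = 120)
map("world", col = "grey80", fill = TRUE)
# ... plot the airports and flight lines here as in the original code ...
dev.off()

# In an interactive session, an explicitly sized window works the same way:
# dev.new(width = 14, height = 8)   # size in inches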
2018 May 07
0
Gluster Monthly Newsletter, April 2018
Announcing mountpoint, August 27-28, 2018
Our inaugural software-defined storage conference combining Gluster,
Ceph and other projects! More details at:
http://lists.gluster.org/pipermail/gluster-users/2018-May/034039.html
CFP at: http://mountpoint.io/
Out of cycle updates for all maintained Gluster versions: New updates
for 3.10, 3.12 and 4.0
2018 May 31
0
Gluster Monthly Newsletter, May 2018
Announcing mountpoint, August 27-28, 2018
Our inaugural software-defined storage conference combining Gluster,
Ceph and other projects! More details at:
http://lists.gluster.org/pipermail/gluster-users/2018-May/034039.html
CFP at: http://mountpoint.io/ - closes June 15
Gluster Summit Videos - All our available videos (and slides) from
Gluster Summit 2017 are up! Check out the
2023 Mar 21
1
How to configure?
Killed glfsheal, after a day there were 218 processes, then they got
killed by OOM during the weekend. Now there are no processes active.
Trying to run "heal info" reports lots of files quite quickly but does
not spawn any glfsheal process. And neither does restarting glusterd.
Is there some way to selectively run glfsheal to fix one brick at a time?
Diego
On 21/03/2023 01:21,
2023 Mar 24
1
How to configure?
In glfsheal-Connection.log I see many lines like:
[2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021]
[glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the
volume file [{from server}, {errno=2}, {error=File o directory non
esistente}]
(the error string is the Italian locale message for "No such file or
directory")
And *lots* of gfid-mismatch errors in glustershd.log .
Couldn't find anything that would prevent heal from starting. :(
Diego
On 21/03/2023
2023 Mar 21
1
How to configure?
I have no clue. Have you checked for errors in the logs? You might find something useful.
Best Regards, Strahil Nikolov
On Tue, Mar 21, 2023 at 9:56, Diego Zuccato <diego.zuccato at unibo.it> wrote:
Killed glfsheal, after a day there were 218 processes, then they got
killed by OOM during the weekend. Now there are no processes active.
Trying to run "heal info" reports
2023 Mar 21
1
How to configure?
Theoretically it might help. If possible, try to resolve any pending heals.
Best Regards, Strahil Nikolov
On Thu, Mar 16, 2023 at 15:29, Diego Zuccato <diego.zuccato at unibo.it> wrote:
In Debian stopping glusterd does not stop brick processes: to stop
everything (and free the memory) I have to
systemctl stop glusterd
killall glusterfs{,d}
killall glfsheal
systemctl start
2023 Mar 24
1
How to configure?
Can you check your volume file contents? Maybe it really can't find (or access) a specific volfile?
Best Regards, Strahil Nikolov
On Fri, Mar 24, 2023 at 8:07, Diego Zuccato <diego.zuccato at unibo.it> wrote:
In glfsheal-Connection.log I see many lines like:
[2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021]
[glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the
2023 Mar 24
1
How to configure?
There are 285 files in /var/lib/glusterd/vols/cluster_data ... including
many files with names related to quorum bricks already moved to a
different path (like cluster_data.client.clustor02.srv-quorum-00-d.vol
that should already have been replaced by
cluster_data.clustor02.srv-bricks-00-q.vol -- and both vol files exist).
Is there something I should check inside the volfiles?
Diego
On
2023 Mar 16
1
How to configure?
OOM is just a matter of time.
Today mem use is up to 177G/187 and:
# ps aux|grep glfsheal|wc -l
551
(well, one is actually the grep process, so "only" 550 glfsheal processes).
I'll take the last 5:
root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3267220 0.7 0.0 600292 91964 ?
2006 Oct 10
1
flac improvement??
Hello, my name is Ludovico Ausiello. I'm a Ph.D. at the University of Bologna
and I've developed an open-source alternative to the proprietary Philips
Super Audio CD encoder (which actually costs some thousands of dollars!) that
has better performance (it seems strange, but...). I'm interested in using the
FLAC encoder to compress the 1-bit stream that is the output of my encoder
(I start
2007 Nov 23
1
Bug in pacf -- Proposed patch (PR#10455)
Dear all,
following the thread
http://tolstoy.newcastle.edu.au/R/e2/devel/07/09/4338.html
regarding the bug in the partial autocorrelation function for
multivariate time series, I have prepared a web page with patches and
relevant information:
http://www2.stat.unibo.it/giannerini/R/pacf.htm
Please do not hesitate to contact me for further clarifications.
regards
Simone
--
2023 Mar 16
1
How to configure?
Can you restart the glusterd service (first check that it was not modified to kill the bricks)?
Best Regards, Strahil Nikolov
On Thu, Mar 16, 2023 at 8:26, Diego Zuccato <diego.zuccato at unibo.it> wrote:
OOM is just a matter of time.
Today mem use is up to 177G/187 and:
# ps aux|grep glfsheal|wc -l
551
(well, one is actually the grep process, so "only" 550 glfsheal
2023 Mar 15
1
How to configure?
I enabled it yesterday and that greatly reduced memory pressure.
Current volume info:
-8<--
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
Status: Started
Snapshot Count: 0
Number of Bricks: 45 x (2 + 1) = 135
Transport-type: tcp
Bricks:
Brick1: clustor00:/srv/bricks/00/d
Brick2: clustor01:/srv/bricks/00/d
Brick3: clustor02:/srv/bricks/00/q