Displaying 20 results from an estimated 22 matches for "volgen".
2018 Apr 04
2
glusterd2 problem
...log loglevel=debug noembed=false
peeraddress="192.168.222.24:24008" rundir=/usr/var/run/glusterd2
source="[config.go:129:main.dumpConfigToLog]" statedump=true
version=false workdir=/etc/glusterd2
DEBU[2018-04-04 09:28:17.067244] loading templates
source="[template.go:64:volgen.LoadTemplates]" templatesdir=
DEBU[2018-04-04 09:28:17.068207] generated default templates
source="[template.go:82:volgen.LoadTemplates]" templates="[brick.graph
distreplicate.graph fuse.graph distribute.graph replicate.graph
disperse.graph]"
DEBU[2018-04-04 09:28:17.068...
2010 Apr 30
1
gluster-volgen - syntax for mirroring/distributing across 6 nodes
NOTE: posted this to gluster-devel when I meant to post it to gluster-users
01 | 02 mirrored --|
03 | 04 mirrored --| distributed
05 | 06 mirrored --|
1) Would this command work for that?
glusterfs-volgen --name repstore1 --raid 1 clustr-01:/mnt/data01
clustr-02:/mnt/data01 --raid 1 clustr-03:/mnt/data01
clustr-04:/mnt/data01 --raid 1 clustr-05:/mnt/data01
clustr-06:/mnt/data01
So the 'repstore1' is the distributed part, and within that are 3 sets
of mirrored nodes.
2) Then, since we'r...
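For the layout asked about above (three mirrored pairs aggregated into one distributed volume), the graph glusterfs-volgen builds would be expected to look roughly like the following. This is an illustrative sketch only; the volume names are placeholders, not the tool's actual generated output.

```text
# Client-side graph sketch for 3 x (raid 1) distributed — names are hypothetical
volume mirror-0
  type cluster/replicate
  subvolumes clustr-01-1 clustr-02-1
end-volume

volume mirror-1
  type cluster/replicate
  subvolumes clustr-03-1 clustr-04-1
end-volume

volume mirror-2
  type cluster/replicate
  subvolumes clustr-05-1 clustr-06-1
end-volume

volume repstore1-dht
  type cluster/distribute
  subvolumes mirror-0 mirror-1 mirror-2
end-volume
```

Each `--raid 1` pair becomes one cluster/replicate subvolume, and cluster/distribute spreads files across the three mirrors.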
2018 Apr 06
0
glusterd2 problem
...> noembed=false peeraddress="192.168.222.24:24008"
> rundir=/usr/var/run/glusterd2 source="[config.go:129:main.dumpConfigToLog]"
> statedump=true version=false workdir=/etc/glusterd2
> DEBU[2018-04-04 09:28:17.067244] loading templates
> source="[template.go:64:volgen.LoadTemplates]" templatesdir=
> DEBU[2018-04-04 09:28:17.068207] generated default templates
> source="[template.go:82:volgen.LoadTemplates]" templates="[brick.graph
> distreplicate.graph fuse.graph distribute.graph replicate.graph
> disperse.graph]"
> DEBU[2...
2009 Oct 23
0
No glusterfs-volgen executable
Hi list
I just compiled and installed Glusterfs on x86 according to the
official documentation. It seems glusterfs is installed (I can run it
and it tells the version - 2.0.7 in my case) but there is no
glusterfs-volgen tool. What do I need to do to get this?
Thanks
Daniel
2017 Aug 02
0
[Update] GD2 - what's been happening
...xlator developers will not need any changes in GD2 to add
new options to their xlator. What this also means is that we will
require some changes to the xlator options table to add some
information that used to be available in the GD options table. We will
be detailing the required changes soon.
- Volgen [6]
I've been working on a getting a volgen package and framework ready.
We had a very ambitious design earlier [7] involving a dynamic graph
generator with dependency resolution. Work was done on this long back
[8], but was stopped as it turned out to be too complex. The new plan
is much sim...
2010 Mar 04
1
[3.0.2] booster + unfsd failed
...9.23 (C) 2009, Pascal Schmidt <unfs3-server at ewetel.net>
realpath for /nfs/NAS failed
syntax error in '/etc/unfs3exports', exporting nothing
root at ccc1:~/gluster/unfs3-0.9.23booster0.5# cat
/usr/local/etc/glusterfs/glusterfsd.vol
## file auto generated by /usr/local/bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /usr/local/bin/glusterfs-volgen -n NAS 192.168.1.127:/export
192.168.1.128:/export --nfs --cifs
volume posix1
type storage/posix
option directory /export
end-volume
volume locks1
type features/locks
subvolumes posix1
end-volume
volume brick1
type perf...
2011 Feb 02
2
Gluster 3.1.2 and rpc-auth patch
...ges
After browsing the web/mailing list and trying to find a workaround to implement NFS auth, we decided to patch the source code to add an extra option to the gluster "volume set" framework, which was a rather easy task considering the quality of the source code.
A few lines in glusterd-volgen.c did the trick.
It worked for us, so here is the patch, which allows users to issue:
gluster volume set MyVolume rpc-auth.allow "10.*,192.*"
default is still "*"
Cheers
--
Benjamin Cleyet-Marrel
Director of Engineering
Open Wide Outsourcing
http://outsourcing.openwide....
2011 Jan 13
0
distribute-replicate setup GFS Client crashed
...usterfs.so.0(event_dispatch+0x21)[0xf7770a21]
glusterfsc(main+0x48c)[0x804c45c]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xdc)[0xf75d718c]
glusterfsc[0x804a631]
The config files are attached.
tx
Vikas
## file auto generated by /usr/local/bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /usr/local/bin/glusterfs-volgen --name gfs 172.24.0.68:/ghostcache/home/hsawhney/gfs/ 172.24.0.222:/ghostcache/home/hsawhney/gfs/
volume posix1
type storage/posix
option directory /ghostcache/gfs-export/
end-volume
volume locks1
type features/locks
subvolu...
2010 May 04
1
Glusterfs and Xen - mount option
...nstalled Gluster Storage Platform on two servers (mirror) and mounted
the volume on a client.
I had to mount it using "mount -t glusterfs
<MANAGEMENT_SERVER>:<VOLUMENAME-ON-WEBGUI>-tcp <MOUNTPOINT>" because I
didn't have a vol file and couldn't generate one with volgen, because we don't
have shell access to the gluster server, right?
How can I add the option to mount it with --disable-direct-io-mode?
I can't find how to do it.
Do you recommend using Xen with gluster? How about performance issues?
Best Regards
Paulo Cardoso
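On the mount-option question above: with the FUSE client, direct I/O is controlled by a mount option rather than a volgen flag, so no shell access to the server is needed. A hedged sketch (the `direct-io-mode` option name is as documented for the 3.x FUSE client; server and volume names are placeholders):

```text
# /etc/fstab — illustrative entry, adjust names and paths to your setup
server1:volumename-tcp  /mnt/gluster  glusterfs  direct-io-mode=disable,defaults  0  0
```

The equivalent one-off mount would pass the same option via `-o direct-io-mode=disable`.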
2010 Apr 22
1
Transport endpoint not connected
...'ve recently implemented gluster to share web content read-write between
two servers.
Version : glusterfs 3.0.4 built on Apr 19 2010 16:37:50
Fuse : 2.7.2-1ubuntu2.1
Platform : ubuntu 8.04LTS
I used the following command to generate my configs:
/usr/local/bin/glusterfs-volgen --name repstore1 --raid 1
10.10.130.11:/data/export 10.10.130.12:/data/export
And mount them on each of the servers as so:
/etc/fstab:
/etc/glusterfs/repstore1-tcp.vol /data/import glusterfs defaults 0 0
Every 12 hours or so, one or other of the servers will lose the mount
and erro...
2010 Apr 19
1
Permission Problems
...VMware ESXi 4. With one volume exported as "raid 1".
I mounted the share with the GlusterClient 3.0.2 with the following /etc/fstab line:
/etc/glusterfs/client.vol /mnt/images glusterfs defaults 0 0
The client.vol looks like this:
# auto generated by /usr/bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /usr/bin/glusterfs-volgen --conf-dir=/etc/glusterfs --name=images --raid=1 --transport=tcp --port=10002 --auth=192.168.1.168,192.168.1.167,* gluster2:/exports/sda2/images gluster1:/exports/sda2/images
# RAID 1
# TRANSPORT-TYPE tcp
volume gluster2-1
type protocol/cli...
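The truncated client.vol above follows the usual mount.vol shape for `--raid=1`: one protocol/client per brick, joined by a cluster/replicate translator. A minimal sketch with placeholder names (not the actual generated file):

```text
# Client-side mount graph sketch — names are illustrative
volume gluster2-1
  type protocol/client
  option transport-type tcp
  option remote-host gluster2
  option remote-subvolume /exports/sda2/images
end-volume

volume gluster1-1
  type protocol/client
  option transport-type tcp
  option remote-host gluster1
  option remote-subvolume /exports/sda2/images
end-volume

volume mirror-0
  type cluster/replicate
  subvolumes gluster2-1 gluster1-1
end-volume
```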
2018 Mar 05
1
[Gluster-devel] Removal of use-compound-fops option in afr
...code. Sent the patch at
>> https://review.gluster.org/19655
>>
>>
> If I understand it right, as of now AFR is the only component which uses
> Compound FOP. If it stops using that code, should we maintain the compound
> fop codebase at all in other places, like protocol, volgen (decompounder
> etc?)
>
Infra was also supposed to be used by gfapi when compound fops is
introduced. So I think it is not a bad idea to keep it until at least there
is a decision about it. It will be similar to loading feature modules like
quota even when quota is not used on the volume....
2004 Jun 14
2
Samba shares becoming inactive after a while
...erland
+31 (0)252 416530 (voice)
+31 (0)252 419481 (fax)
<http://www.sercom.nl/>
To all our quotations, to all orders placed with us, and to all agreements
concluded with us, the METAALUNIEVOORWAARDEN apply, as filed with the
Registry of the Rechtbank te Rotterdam, in the wording of the text most
recently deposited there. The terms of delivery will be sent to you on
request.
2011 Jun 14
0
read-ahead performance translator tweaking with 3.2.1?
Hi
Is there a way to tweak the read-ahead settings via the gluster command
line? For example:
gluster volume set somevolumename performance.read-ahead 2
Or is this no longer feasible? With read-ahead set to the default of
8, as was the case with standard volgen-generated configs, the amount
of useless reads happening to the bricks is way too high, and on 1 GbE
interconnects causes saturation and performance degradation in no time.
Thanks.
Mohan
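On the question above: in the 3.x volume-set framework, read-ahead can be toggled per volume, and its aggressiveness is governed by a page-count option. A hedged sketch (option names as documented for gluster 3.x; verify against `gluster volume set help` on your version, and the volume name is a placeholder):

```sh
# Disable the read-ahead translator entirely for one volume
gluster volume set somevolumename performance.read-ahead off

# Or keep it enabled but reduce how many pages are read ahead
gluster volume set somevolumename performance.read-ahead-page-count 2
```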
2010 Nov 22
1
weird routing issue
Hi,
I have the following tinc grid:
http://keetweej.vanheusden.com/stats/tinc-fvh-network-graph.png
Now a funny thing happens: bpsolxp routes traffic to clientbp via
'server', not directly. Both run 1.0.13. Bug?
Folkert.
--
Nagios user? Check CoffeeSaint, the versatile nagios status display-
monitor. http://vanheusden.com/java/CoffeeSaint/
2017 Jun 05
0
Gluster Monthly Newsletter, May 2017
...mail/gluster-devel/2017-May/052811.html
Reviews older than 90 days - Amar Tumballi -
http://lists.gluster.org/pipermail/gluster-devel/2017-May/052844.html
[Proposal]: Changes to how we test and vote each patch - Amar Tumballi -
http://lists.gluster.org/pipermail/gluster-devel/2017-May/052868.html
Volgen support for loading trace and io-stats translators at specific
points in the graph - Krutika Dhananjay -
http://lists.gluster.org/pipermail/gluster-devel/2017-May/052881.html
Backport for "Add back socket for polling of events immediately..." - Shyam
http://lists.gluster.org/pipermail/glu...
2010 May 31
2
DHT translator problem
Hello,
I am trying to configure a volume using DHT, however after I mount it,
the mount point looks rather strange and when I try to do 'ls' on it I get:
ls: /mnt/gtest: Stale NFS file handle
I can create files and dirs in the mount point, I can list them but I
can't list the mount point itself.
Example:
the volume is mounted on /mnt/gtest
[root at storage2]# ls -l /mnt/
?---------
2010 Apr 14
1
ipv6 via tinc
Hi,
At my provider (xs4all) I've got an ipv6 tunnel working. Now I would
like to distribute ipv6 via the tinc tunnel.
My tinc.conf:
------------
Name=server
AddressFamily=ipv4
Device=/dev/net/tun
PrivateKeyFile=/etc/tinc/fvhglobalnet/rsa_key.priv
GraphDumpFile=|/usr/bin/dot -Tpng -o /var/www/htdocs.keetweej.vanheusden.com/stats/tinc-fvh-network-graph.png
Mode=switch
KeyExpire=299
2011 Jan 12
1
Setting up 3.1
[Repost - last time this didn't seem to work]
I've been running gluster for a couple of years, so I'm quite used to 3.0.x and earlier. I'm looking to upgrade to 3.1.1 for more stability (I'm getting frequent 'file has vanished' errors when rsyncing from 3.0.6) on a bog-standard 2-node dist/rep config. So far it's not going well. I'm running on Ubuntu Lucid x64
2010 May 04
1
Posix warning : Access to ... is crossing device
I have a distributed/replicated setup with Glusterfs 3.0.2, that I'm
testing on 4 servers, each with access to /mnt/gluster (which consists
of all directories /mnt/data01 - data24) on each server. I'm using
configs I built from volgen, but every time I access a file (via an
'ls -l') for the first time, I get all of these messages in my logs on
each server:
[2010-05-04 10:50:30] W [posix.c:246:posix_lstat_with_gen] posix1:
Access to /mnt/data01//.. (on dev 16771) is crossing device (2257)
[2010-05-04 10:50:30] W [posix.c...