unsubscribe gluster
2015-03-17 20:00 GMT+08:00 <gluster-users-request at gluster.org>:
> Send Gluster-users mailing list submissions to
> gluster-users at gluster.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://www.gluster.org/mailman/listinfo/gluster-users
> or, via email, send a message with subject or body 'help' to
> gluster-users-request at gluster.org
>
> You can reach the person managing the list at
> gluster-users-owner at gluster.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Gluster-users digest..."
>
>
> Today's Topics:
>
> 1. Re: Issue with removing directories after disabling quotas
> (Vijaikumar M)
> 2. glusterfs geo-replication - server with two interfaces -
> private IP advertised (Stefan Moravcik)
> 3. adding a node (Pierre Léonard)
> 4. Re: adding a node (JF Le Fillâtre)
> 5. Re: adding a node (Pierre Léonard)
> 6. Re: adding a node (JF Le Fillâtre)
> 7. tune2fs exited with non-zero exit status
> (Osborne, Paul (paul.osborne at canterbury.ac.uk))
> 8. Re: adding a node (Pierre Léonard)
> 9. Re: adding a node (JF Le Fillâtre)
> 10. Re: adding a node (Pierre Léonard)
> 11. Gluster 3.6 issue (Félix de Lelelis)
> 12. I/O error on replicated volume (Jonathan Heese)
> 13. Re: [ovirt-users] Zombie disks failed to remove.. (Punit Dambiwal)
> 14. Running a SQL database on Gluster on AWS (Jay Strauss)
> 15. Re: I/O error on replicated volume (Ravishankar N)
> 16. VM failed to start | Bad volume specification (Punit Dambiwal)
> 17. Re: Working but some issues (Melkor Lord)
> 18. Re: Working but some issues (Joe Julian)
> 19. Bad volume specification | Connection timeout (Punit Dambiwal)
> 20. Re: adding a node (Pierre Léonard)
> 21. REMINDER: Gluster Community Bug Triage meeting today at 12:00
> UTC (Niels de Vos)
> 22. Re: adding a node (JF Le Fillâtre)
> 23. Re: adding a node (Pierre Léonard)
> 24. Re: Working but some issues (Nico Schottelius)
> 25. GlusterFS 3.4.7beta2 is now available for testing (Kaleb KEITHLEY)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 16 Mar 2015 17:30:41 +0530
> From: Vijaikumar M <vmallika at redhat.com>
> To: JF Le Fillâtre <jean-francois.lefillatre at uni.lu>,
> gluster-users at gluster.org
> Subject: Re: [Gluster-users] Issue with removing directories after
> disabling quotas
> Message-ID: <5506C5E9.2070204 at redhat.com>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
>
> On Monday 16 March 2015 05:04 PM, JF Le Fillâtre wrote:
> > Hello Vijay,
> >
> > Mmmm sorry, I jumped the gun and deleted by hand all the files present
> > on the bricks, so I can't see if there's any link anywhere...
> >
> > It's not the first time I have seen this. Very early in my tests I had
> > another case of files still present on the bricks but not visible in
> > Gluster, which I solved the same way. At that point I chalked it up to
> > my limited knowledge of GlusterFS and I assumed that I had misconfigured it.
> >
> > What do you suspect that it may be, and what do I have to look for if it
> > ever happens again?
> We wanted to check the file attributes and xattrs on the bricks to see why
> some files were not deleted.
> Is this problem easily re-creatable? If yes, can you please provide the
> test-case?
>
> Thanks,
> Vijay
>
>
> > Thanks!
> > JF
> >
> >
> >
> > On 16/03/15 12:25, Vijaikumar M wrote:
> >> Hi JF,
> >>
> >> This may not be a quota issue. Can you please check if there are any
> >> linkto files on the bricks?
> >> On node 'stor104/106', can we get the below output
> >>
> >> #find /zfs/brick0/brick | xargs ls -l
> >> #find /zfs/brick1/brick | xargs ls -l
> >> #find /zfs/brick2/brick | xargs ls -l
> >>
> >> #find /zfs/brick0/brick | xargs getfattr -d -m . -e hex
> >> #find /zfs/brick1/brick | xargs getfattr -d -m . -e hex
> >> #find /zfs/brick2/brick | xargs getfattr -d -m . -e hex
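> >>
> >> A DHT linkto file, if present, usually shows up as a zero-byte file with
> >> the sticky bit set (mode ---------T in ls -l) and carries the
> >> trusted.glusterfs.dht.linkto xattr; assuming a suspect file path under
> >> one of the bricks above, it can also be checked individually with
> >> something like:
> >>
> >> #getfattr -n trusted.glusterfs.dht.linkto -e text /zfs/brick0/brick/path/to/file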
> >>
> >>
> >> Thanks,
> >> Vijay
> >>
> >>
> >>
> >> On Monday 16 March 2015 04:18 PM, JF Le Fillâtre wrote:
> >>> Forgot to mention: the non-empty directories list files like this:
> >>>
> >>> -?????????? ? ? ? ? ? Hal8723APhyReg.h
> >>> -?????????? ? ? ? ? ? Hal8723UHWImg_CE.h
> >>> -?????????? ? ? ? ? ? hal_com.h
> >>> -?????????? ? ? ? ? ? HalDMOutSrc8723A.h
> >>> -?????????? ? ? ? ? ? HalHWImg8723A_BB.h
> >>> -?????????? ? ? ? ? ? HalHWImg8723A_FW.h
> >>> -?????????? ? ? ? ? ? HalHWImg8723A_MAC.h
> >>> -?????????? ? ? ? ? ? HalHWImg8723A_RF.h
> >>> -?????????? ? ? ? ? ? hal_intf.h
> >>> -?????????? ? ? ? ? ? HalPwrSeqCmd.h
> >>> -?????????? ? ? ? ? ? ieee80211.h
> >>> -?????????? ? ? ? ? ? odm_debug.h
> >>>
> >>> Thanks,
> >>> JF
> >>>
> >>>
> >>> On 16/03/15 11:45, JF Le Fillâtre wrote:
> >>>> Hello all,
> >>>>
> >>>> So, another day another issue. I was trying to play with quotas on my
> >>>> volume:
> >>>>
> >>>>
> >>>> ===============================================================================
> >>>> [root at stor104 ~]# gluster volume status
> >>>> Status of volume: live
> >>>> Gluster process                                 Port    Online  Pid
> >>>> ------------------------------------------------------------------------------
> >>>> Brick stor104:/zfs/brick0/brick                 49167   Y       13446
> >>>> Brick stor104:/zfs/brick1/brick                 49168   Y       13457
> >>>> Brick stor104:/zfs/brick2/brick                 49169   Y       13468
> >>>> Brick stor106:/zfs/brick0/brick                 49159   Y       14158
> >>>> Brick stor106:/zfs/brick1/brick                 49160   Y       14169
> >>>> Brick stor106:/zfs/brick2/brick                 49161   Y       14180
> >>>> NFS Server on localhost                         2049    Y       13483
> >>>> Quota Daemon on localhost                       N/A     Y       13490
> >>>> NFS Server on stor106                           2049    Y       14195
> >>>> Quota Daemon on stor106                         N/A     Y       14202
> >>>>
> >>>> Task Status of Volume live
> >>>> ------------------------------------------------------------------------------
> >>>> Task                 : Rebalance
> >>>> ID                   : 6bd03709-1f48-49a9-a215-d0a6e6f3ab1e
> >>>> Status               : completed
> >>>> ===============================================================================
> >>>>
> >>>>
> >>>> Not sure if the "Quota Daemon on localhost -> N/A" is normal, but
> >>>> that's another topic.
> >>>>
> >>>> While the quotas were enabled, I did some test copying a whole tree
> >>>> of small files (the Linux kernel sources) to the volume to see which
> >>>> performance I would get, and it was really low. So I decided to
> >>>> disable quotas again:
> >>>>
> >>>>
> >>>> ===============================================================================
> >>>> [root at stor104 ~]# gluster volume status
> >>>> Status of volume: live
> >>>> Gluster process                                 Port    Online  Pid
> >>>> ------------------------------------------------------------------------------
> >>>> Brick stor104:/zfs/brick0/brick                 49167   Y       13754
> >>>> Brick stor104:/zfs/brick1/brick                 49168   Y       13765
> >>>> Brick stor104:/zfs/brick2/brick                 49169   Y       13776
> >>>> Brick stor106:/zfs/brick0/brick                 49159   Y       14282
> >>>> Brick stor106:/zfs/brick1/brick                 49160   Y       14293
> >>>> Brick stor106:/zfs/brick2/brick                 49161   Y       14304
> >>>> NFS Server on localhost                         2049    Y       13790
> >>>> NFS Server on stor106                           2049    Y       14319
> >>>>
> >>>> Task Status of Volume live
> >>>> ------------------------------------------------------------------------------
> >>>> Task                 : Rebalance
> >>>> ID                   : 6bd03709-1f48-49a9-a215-d0a6e6f3ab1e
> >>>> Status               : completed
> >>>> ===============================================================================
> >>>>
> >>>>
> >>>> I remounted the volume from the client and tried deleting the
> >>>> directory containing the sources, which gave me a very long list of
> >>>> this:
> >>>>
> >>>>
> >>>> ===============================================================================
> >>>> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/ftrace/test.d/kprobe’: Directory not empty
> >>>> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/ptrace’: Directory not empty
> >>>> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/rcutorture/configs/rcu/v0.0’: Directory not empty
> >>>> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/rcutorture/configs/rcu/v3.5’: Directory not empty
> >>>> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/powerpc’: Directory not empty
> >>>> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/perf/scripts/python/Perf-Trace-Util’: Directory not empty
> >>>> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/perf/Documentation’: Directory not empty
> >>>> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/perf/ui/tui’: Directory not empty
> >>>> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/perf/util/include’: Directory not empty
> >>>> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/lib’: Directory not empty
> >>>> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/virtio’: Directory not empty
> >>>> rm: cannot remove ‘/glusterfs/live/linux-3.18.7/virt/kvm’: Directory not empty
> >>>>
> >>>> ===============================================================================
> >>>>
> >>>>
> >>>> I did my homework on Google, yet the information I found was that the
> >>>> cause for this is that the contents of the bricks have been modified
> >>>> locally. It's definitely not the case here, I have *not* touched the
> >>>> contents of the bricks.
> >>>>
> >>>> So my question is: is it possible that disabling the quotas may have
> >>>> had some side effects on the metadata of Gluster? If so, what can I
> >>>> do to force Gluster to rescan all local directories and "import"
> >>>> local files?
> >>>>
> >>>> GlusterFS version: 3.6.2
> >>>>
> >>>> The setup of my volume:
> >>>>
> >>>>
> >>>> ===============================================================================
> >>>> [root at stor104 ~]# gluster volume info
> >>>> Volume Name: live
> >>>> Type: Distribute
> >>>> Volume ID:
> >>>> Status: Started
> >>>> Number of Bricks: 6
> >>>> Transport-type: tcp
> >>>> Bricks:
> >>>> Brick1: stor104:/zfs/brick0/brick
> >>>> Brick2: stor104:/zfs/brick1/brick
> >>>> Brick3: stor104:/zfs/brick2/brick
> >>>> Brick4: stor106:/zfs/brick0/brick
> >>>> Brick5: stor106:/zfs/brick1/brick
> >>>> Brick6: stor106:/zfs/brick2/brick
> >>>> Options Reconfigured:
> >>>> features.quota: off
> >>>> performance.readdir-ahead: on
> >>>> nfs.volume-access: read-only
> >>>> cluster.data-self-heal-algorithm: full
> >>>> performance.strict-write-ordering: off
> >>>> performance.strict-o-direct: off
> >>>> performance.force-readdirp: off
> >>>> performance.write-behind-window-size: 4MB
> >>>> performance.io-thread-count: 32
> >>>> performance.flush-behind: on
> >>>> performance.client-io-threads: on
> >>>> performance.cache-size: 32GB
> >>>> performance.cache-refresh-timeout: 60
> >>>> performance.cache-max-file-size: 4MB
> >>>> nfs.disable: off
> >>>> cluster.eager-lock: on
> >>>> cluster.min-free-disk: 1%
> >>>> server.allow-insecure: on
> >>>> diagnostics.client-log-level: ERROR
> >>>> diagnostics.brick-log-level: ERROR
> >>>>
> >>>> ===============================================================================
> >>>>
> >>>>
> >>>> It is mounted from the client with those fstab options:
> >>>>
> >>>>
> >>>> ===============================================================================
> >>>> stor104:live  defaults,backupvolfile-server=stor106,direct-io-mode=disable,noauto
> >>>>
> >>>> ===============================================================================
> >>>>
> >>>> Attached are the log files from stor104
> >>>>
> >>>> Thanks a lot for any help!
> >>>> JF
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> _______________________________________________
> >>>> Gluster-users mailing list
> >>>> Gluster-users at gluster.org
> >>>> http://www.gluster.org/mailman/listinfo/gluster-users
> >>>>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 16 Mar 2015 13:10:48 +0100
> From: Stefan Moravcik <smoravcik at newsweaver.com>
> To: gluster-users at gluster.org
> Subject: [Gluster-users] glusterfs geo-replication - server with two
> interfaces - private IP advertised
> Message-ID: <D51E5B92-87A6-4213-AFBD-04C3207E6E34 at newsweaver.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> I have been trying to set up geo-replication with glusterfs servers.
> Everything worked as expected in my test environment and on my staging
> environment, but then I tried production and got stuck.
>
> Let's say I have:
>
> The GlusterFS master server is on public IP 1.1.1.1.
>
> The GlusterFS slave is on public IP 2.2.2.2, but this IP is on interface eth1.
> The eth0 interface on the GlusterFS slave server is 192.168.0.1.
>
> So when I run the command on 1.1.1.1 (firewall and ssh keys are set up
> properly):
>
> gluster volume geo-replication vol0 2.2.2.2::vol0 create push-pem
> I get an error.
>
> Unable to fetch slave volume details. Please check the slave cluster and
> slave volume. geo-replication command failed
>
> The error message itself is not that important in this case; the problem is
> the slave IP address:
>
> 2015-03-16T11:41:08.101229+00:00 xxx kernel: TCP LOGDROP: IN= OUT=eth0
> SRC=1.1.1.1 DST=192.168.0.1 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=24243 DF
> PROTO=TCP SPT=1015 DPT=24007 WINDOW=14600 RES=0x00 SYN URGP=0
> As you can see in the firewall drop log above, port 24007 of the slave
> gluster daemon is advertised on the private IP of interface eth0 on the
> slave server, when it should be the address on eth1. So the master cannot
> connect and will time out.
>
> Is there a way to force the gluster server to advertise the eth1 address,
> or to bind to it only?
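>
> One thing I have considered trying (just a sketch; I am not sure whether
> the option applies to this case) is pinning glusterd on the slave to the
> eth1 address in /etc/glusterfs/glusterd.vol:
>
> volume management
>     type mgmt/glusterd
>     option working-directory /var/lib/glusterd
>     option transport-type socket,rdma
>     option transport.socket.bind-address 2.2.2.2
> #   ... rest of the stock options unchanged ...
> end-volume
>
> Another possible workaround would be to point the master at a slave
> hostname that resolves to the public address on both ends, rather than
> at the bare IP.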
>
> Thank you
> --
>
>
>
> ------------------------------
>
> Message: 3
> Date: Mon, 16 Mar 2015 16:17:04 +0100
> From: Pierre Léonard <pleonard at jouy.inra.fr>
> To: "gluster-users at gluster.org" <gluster-users at gluster.org>
> Subject: [Gluster-users] adding a node
> Message-ID: <5506F3F0.40803 at jouy.inra.fr>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
> Hi All,
>
> I am not the only one having problems with adding a node to an
> existing distributed striped or replicated volume,
> and I have never read a real solution.
>
> My configuration is the following:
>
> * four nodes with a gvHome stripe 2 volume;
> * first I probe the new node and it's OK;
> * then I try to add the new node:/brick to the four others and get the
> answer that I have to add a multiple of the stripe count, so 4 other
> nodes. But I have only one;
> * so I try to make the addition from the new node. If I list the
> volume on the new node it recognizes the striped volume. So I stop
> the volume on the four and run: gluster volume add-brick gvHome
> stripe 2 boogy1:/mnt/gvHome/brick1 boogy2:/mnt/gvHome/brick1
> boogy3:/mnt/gvHome/brick1 boogy4:/mnt/gvHome/brick1 and the answer is
> the following:
>
> [root at moody ~]# gluster volume add-brick gvHome stripe 2
> boogy1:/mnt/gvHome/brick1 boogy2:/mnt/gvHome/brick1
> boogy3:/mnt/gvHome/brick1 boogy4:/mnt/gvHome/brick1
> Changing the 'stripe count' of the volume is not a supported feature. In
> some cases it may result in data loss on the volume. Also there may be
> issues with regular filesystem operations on the volume after the
> change. Do you really want to continue with 'stripe' count option ? (y/n)
> y
> volume add-brick: failed: Staging failed on boogy2. Error: Brick:
> boogy2:/mnt/gvHome/brick1 not available. Brick may be containing or be
> contained by an existing brick
> Staging failed on boogy3. Error: Brick: boogy3:/mnt/gvHome/brick1 not
> available. Brick may be containing or be contained by an existing brick
> Staging failed on boogy1. Error: Brick: boogy1:/mnt/gvHome/brick1 not
> available. Brick may be containing or be contained by an existing brick
> Staging failed on 138.102.172.94. Error: Brick:
> boogy4:/mnt/gvHome/brick1 not available. Brick may be containing or be
> contained by an existing brick
>
>
> So how do I add nodes one by one?
>
> Many Thank's
>
> sincerely
>
> --
> Signature electronique
> INRA <http://www.inra.fr>
> *Pierre Léonard*
> *Senior IT Manager*
> *MetaGenoPolis*
> Pierre.Leonard at jouy.inra.fr <mailto:Pierre.Leonard at jouy.inra.fr>
> Tél. : +33 (0)1 34 65 29 78
>
> Centre de recherche INRA
> Domaine de Vilvert – Bât. 325 R+1
> 78 352 Jouy-en-Josas CEDEX
> France
> www.mgps.eu <http://www.mgps.eu>
>
>
> ------------------------------
>
> Message: 4
> Date: Mon, 16 Mar 2015 16:28:44 +0100
> From: JF Le Fillâtre <jean-francois.lefillatre at uni.lu>
> To: <gluster-users at gluster.org>
> Subject: Re: [Gluster-users] adding a node
> Message-ID: <5506F6AC.3060907 at uni.lu>
> Content-Type: text/plain; charset="utf-8"
>
>
> Hello,
>
> You cannot. As the message said, you cannot change the stripe count. So
> you can add 4 bricks and you have a distributed + striped volume, but
> you cannot change the stripe count by re-adding existing bricks into the
> volume with a different stripe count, or by adding an additional brick
> to the volume.
>
> Your email isn't very clear; could you please send:
> - the list of commands in the order that you typed them;
> - the output of "gluster volume info" and "gluster volume status"
>
> Thanks!
> JF
>
>
>
> On 16/03/15 16:17, Pierre Léonard wrote:
> > Hi All,
> >
> > I am not the only one having problems with adding a node to an
> > existing distributed striped or replicated volume,
> > and I have never read a real solution.
> >
> > My configuration is the following:
> >
> > * four nodes with a gvHome stripe 2 volume;
> > * first I probe the new node and it's OK;
> > * then I try to add the new node:/brick to the four others and get the
> > answer that I have to add a multiple of the stripe count, so 4 other
> > nodes. But I have only one;
> > * so I try to make the addition from the new node. If I list the
> > volume on the new node it recognizes the striped volume. So I stop
> > the volume on the four and run: gluster volume add-brick gvHome
> > stripe 2 boogy1:/mnt/gvHome/brick1 boogy2:/mnt/gvHome/brick1
> > boogy3:/mnt/gvHome/brick1 boogy4:/mnt/gvHome/brick1 and the answer is
> > the following:
> >
> > [root at moody ~]# gluster volume add-brick gvHome stripe 2
> > boogy1:/mnt/gvHome/brick1 boogy2:/mnt/gvHome/brick1
> > boogy3:/mnt/gvHome/brick1 boogy4:/mnt/gvHome/brick1
> > Changing the 'stripe count' of the volume is not a supported feature. In
> > some cases it may result in data loss on the volume. Also there may be
> > issues with regular filesystem operations on the volume after the
> > change. Do you really want to continue with 'stripe' count option ? (y/n) y
> > volume add-brick: failed: Staging failed on boogy2. Error: Brick:
> > boogy2:/mnt/gvHome/brick1 not available. Brick may be containing or be
> > contained by an existing brick
> > Staging failed on boogy3. Error: Brick: boogy3:/mnt/gvHome/brick1 not
> > available. Brick may be containing or be contained by an existing brick
> > Staging failed on boogy1. Error: Brick: boogy1:/mnt/gvHome/brick1 not
> > available. Brick may be containing or be contained by an existing brick
> > Staging failed on 138.102.172.94. Error: Brick:
> > boogy4:/mnt/gvHome/brick1 not available. Brick may be containing or be
> > contained by an existing brick
> >
> >
> > So how do I add nodes one by one?
> >
> > Many Thank's
> >
> > sincerely
> >
> > --
> > Signature electronique
> > INRA <http://www.inra.fr>
> >
> > *Pierre Léonard*
> > *Senior IT Manager*
> > *MetaGenoPolis*
> > Pierre.Leonard at jouy.inra.fr <mailto:Pierre.Leonard at jouy.inra.fr>
> > Tél. : +33 (0)1 34 65 29 78
> >
> > Centre de recherche INRA
> > Domaine de Vilvert – Bât. 325 R+1
> > 78 352 Jouy-en-Josas CEDEX
> > France
> > www.mgps.eu <http://www.mgps.eu>
> >
> >
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
> >
>
> --
>
> Jean-François Le Fillâtre
> -------------------------------
> HPC Systems Administrator
> LCSB - University of Luxembourg
> -------------------------------
> PGP KeyID 0x134657C6
>
>
> ------------------------------
>
> Message: 5
> Date: Mon, 16 Mar 2015 16:56:45 +0100
> From: Pierre Léonard <pleonard at jouy.inra.fr>
> To: JF Le Fillâtre <jean-francois.lefillatre at uni.lu>,
> "gluster-users at gluster.org" <gluster-users at gluster.org>
> Subject: Re: [Gluster-users] adding a node
> Message-ID: <5506FD3D.1080200 at jouy.inra.fr>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
> Hi JF Le Fillâtre,
>
> Hello,
>
> You cannot. As the message said, you cannot change the stripe count.
>
> I don't change the stripe count, the number is the same.
>
> So
> you can add 4 bricks and you have a distributed + striped volume, but
> you cannot change the stripe count by re-adding existing bricks into the
> volume with a different stripe count, or by adding an additional brick
> to the volume.
>
> That is the real question: "is it possible to add nodes to a striped
> volume one by one?"
> If not, when I receive a new node I can't insert it into the global
> cluster service!
>
>
> Your email isn't very clear; could you please send:
> - the list of commands in the order that you typed them;
> - the output of "gluster volume info" and "gluster volume status"
>
> On one of the four nodes:
> [root at boogy2 ~]# gluster volume stop gvHome
>
> on the new node :
> [root at moody gvHome]# gluster volume add-brick gvHome moody:/mnt/gvHome/brick1
> volume add-brick: failed: Incorrect number of bricks supplied 1 with count 2
>
> [root at boogy2 ~]# gluster volume info gvHome
>
> Volume Name: gvHome
> Type: Distributed-Stripe
> Volume ID: aa7bc5be-e6f6-43b0-bba2-b9d51ed2e4ef
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: boogy1:/mnt/gvHome/brick1
> Brick2: boogy2:/mnt/gvHome/brick1
> Brick3: boogy3:/mnt/gvHome/brick1
> Brick4: boogy4:/mnt/gvHome/brick1
>
>
> [root at boogy2 ~]# gluster volume status gvHome
> Status of volume: gvHome
> Gluster process                                 Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick boogy1:/mnt/gvHome/brick1                 49153   Y       21735
> Brick boogy2:/mnt/gvHome/brick1                 49153   Y       121607
> Brick boogy3:/mnt/gvHome/brick1                 49153   Y       26828
> Brick boogy4:/mnt/gvHome/brick1                 49153   Y       116839
> NFS Server on localhost                         N/A     N       N/A
> NFS Server on moody                             N/A     N       N/A
> NFS Server on 138.102.172.94                    2049    Y       116853
> NFS Server on boogy1                            2049    Y       21749
> NFS Server on boogy3                            2049    Y       26843
>
> Task Status of Volume gvHome
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
>
> [root at boogy2 ~]# glusterd -V
> glusterfs 3.6.1 built on Nov 7 2014 15:15:48
>
>
> that's all.
>
>
> Many thank's
>
> sincereley
>
> --
> Signature electronique
> INRA <http://www.inra.fr>
> *Pierre Léonard*
> *Senior IT Manager*
> *MetaGenoPolis*
> Pierre.Leonard at jouy.inra.fr <mailto:Pierre.Leonard at jouy.inra.fr>
> Tél. : +33 (0)1 34 65 29 78
>
> Centre de recherche INRA
> Domaine de Vilvert – Bât. 325 R+1
> 78 352 Jouy-en-Josas CEDEX
> France
> www.mgps.eu <http://www.mgps.eu>
>
>
> ------------------------------
>
> Message: 6
> Date: Mon, 16 Mar 2015 17:16:47 +0100
> From: JF Le Fillâtre <jean-francois.lefillatre at uni.lu>
> To: Pierre Léonard <pleonard at jouy.inra.fr>,
> "gluster-users at gluster.org" <gluster-users at gluster.org>
> Subject: Re: [Gluster-users] adding a node
> Message-ID: <550701EF.4040101 at uni.lu>
> Content-Type: text/plain; charset="utf-8"
>
>
> Hi,
>
> OK, I was confused because in your original email you mentioned "a
> multiple of the striped so 4 other node".
>
> So yes your stripe count is 2, so you have to add bricks 2 by 2, as the
> error message says. If you didn't, then it would mean that you would
> distribute your data over two 2-brick stripes, and one 1-brick single
> volume, which I am reasonably sure wouldn't work.
>
> If you want to be able to add bricks one by one, you would need a
> regular distributed volume, without:
> - striping (requires adding as many drives as stripe count)
> - replication (requires adding as many drives as stripe replication count)
> - dispersion (requires adding as many drives as whatever your dispersion
> disk count is)
>
> The only two things that you can extend are the number of brick groups in
> a distributed volume, with a similar brick group, or the number of
> replicas (but I'm sure that there are corner cases where you can't do
> it, I haven't tried them yet).
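>
> For example, to grow this volume while keeping the stripe count of 2, the
> bricks would have to come in pairs, roughly like this (the node and brick
> names below are just placeholders):
>
> gluster volume add-brick gvHome nodeA:/mnt/gvHome/brick1 nodeB:/mnt/gvHome/brick1
> gluster volume rebalance gvHome start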
>
> Is that clearer now?
>
> Thanks!
> JF
>
>
>
> On 16/03/15 16:56, Pierre Léonard wrote:
> > Hi JF Le Fillâtre,
> >
> > Hello,
> >
> > You cannot. As the message said, you cannot change the stripe count.
> >
> > I don't change the stripe count, the number is the same.
> >
> > So
> > you can add 4 bricks and you have a distributed + striped volume, but
> > you cannot change the stripe count by re-adding existing bricks into the
> > volume with a different stripe count, or by adding an additional brick
> > to the volume.
> >
> > That is the real question: "is it possible to add nodes to a striped
> > volume one by one?"
> > If not, when I receive a new node I can't insert it into the global
> > cluster service!
> >
> >
> > Your email isn't very clear; could you please send:
> > - the list of commands in the order that you typed them;
> > - the output of "gluster volume info" and "gluster volume status"
> >
> > On one of the four nodes:
> > [root at boogy2 ~]# gluster volume stop gvHome
> >
> > on the new node :
> > [root at moody gvHome]# gluster volume add-brick gvHome moody:/mnt/gvHome/brick1
> > volume add-brick: failed: Incorrect number of bricks supplied 1 with count 2
> >
> > [root at boogy2 ~]# gluster volume info gvHome
> >
> > Volume Name: gvHome
> > Type: Distributed-Stripe
> > Volume ID: aa7bc5be-e6f6-43b0-bba2-b9d51ed2e4ef
> > Status: Started
> > Number of Bricks: 2 x 2 = 4
> > Transport-type: tcp
> > Bricks:
> > Brick1: boogy1:/mnt/gvHome/brick1
> > Brick2: boogy2:/mnt/gvHome/brick1
> > Brick3: boogy3:/mnt/gvHome/brick1
> > Brick4: boogy4:/mnt/gvHome/brick1
> >
> >
> > [root at boogy2 ~]# gluster volume status gvHome
> > Status of volume: gvHome
> > Gluster process                                 Port    Online  Pid
> > ------------------------------------------------------------------------------
> > Brick boogy1:/mnt/gvHome/brick1                 49153   Y       21735
> > Brick boogy2:/mnt/gvHome/brick1                 49153   Y       121607
> > Brick boogy3:/mnt/gvHome/brick1                 49153   Y       26828
> > Brick boogy4:/mnt/gvHome/brick1                 49153   Y       116839
> > NFS Server on localhost                         N/A     N       N/A
> > NFS Server on moody                             N/A     N       N/A
> > NFS Server on 138.102.172.94                    2049    Y       116853
> > NFS Server on boogy1                            2049    Y       21749
> > NFS Server on boogy3                            2049    Y       26843
> >
> > Task Status of Volume gvHome
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
> >
> >
> >
> > [root at boogy2 ~]# glusterd -V
> > glusterfs 3.6.1 built on Nov 7 2014 15:15:48
> >
> >
> > that's all.
> >
> >
> > Many thank's
> >
> > sincereley
> >
> > --
> > Signature electronique
> > INRA <http://www.inra.fr>
> >
> > *Pierre Léonard*
> > *Senior IT Manager*
> > *MetaGenoPolis*
> > Pierre.Leonard at jouy.inra.fr <mailto:Pierre.Leonard at jouy.inra.fr>
> > Tél. : +33 (0)1 34 65 29 78
> >
> > Centre de recherche INRA
> > Domaine de Vilvert – Bât. 325 R+1
> > 78 352 Jouy-en-Josas CEDEX
> > France
> > www.mgps.eu <http://www.mgps.eu>
> >
>
> --
>
> Jean-François Le Fillâtre
> -------------------------------
> HPC Systems Administrator
> LCSB - University of Luxembourg
> -------------------------------
> PGP KeyID 0x134657C6
>
>
> ------------------------------
>
> Message: 7
> Date: Mon, 16 Mar 2015 16:22:03 +0000
> From: "Osborne, Paul (paul.osborne at canterbury.ac.uk)"
> <paul.osborne at canterbury.ac.uk>
> To: "gluster-users at gluster.org" <gluster-users at gluster.org>
> Subject: [Gluster-users] tune2fs exited with non-zero exit status
> Message-ID:
> <AM3PR06MB1160480D807B23B1660DEE9BB020 at AM3PR06MB116.eurprd06.prod.outlook.com>
>
> Content-Type: text/plain; charset="us-ascii"
>
> Hi,
>
> I am just looking through my logs and am seeing a lot of entries of the
> form:
>
> [2015-03-16 16:02:55.553140] I
> [glusterd-handler.c:3530:__glusterd_handle_status_volume] 0-management:
> Received status volume req for volume wiki
> [2015-03-16 16:02:55.561173] E
> [glusterd-utils.c:5140:glusterd_add_inode_size_to_dict] 0-management:
> tune2fs exited with non-zero exit status
> [2015-03-16 16:02:55.561204] E
> [glusterd-utils.c:5166:glusterd_add_inode_size_to_dict] 0-management:
> failed to get inode size
>
> Having had a rummage I *suspect* it is because gluster is trying to get
> the volume status by querying the superblock on the filesystem for a brick
> volume. However this is an issue as when the volume was created it was
> done so in the form:
>
> root at gfsi-rh-01:/mnt# gluster volume create gfs1 replica 2 transport tcp \
>     gfsi-rh-01:/srv/hod/wiki \
>     gfsi-isr-01:/srv/hod/wiki force
>
> Where the those paths to the bricks are not the raw paths but instead are
> paths to the mount points on the local server.
>
> Volume status returns:
> gluster volume status wiki
> Status of volume: wiki
> Gluster process                                         Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick gfsi-rh-01.core.canterbury.ac.uk:/srv/hod/wiki    49157   Y       3077
> Brick gfsi-isr-01.core.canterbury.ac.uk:/srv/hod/wiki   49156   Y       3092
> Brick gfsi-cant-01.core.canterbury.ac.uk:/srv/hod/wiki  49152   Y       2908
> NFS Server on localhost                                 2049    Y       35065
> Self-heal Daemon on localhost                           N/A     Y       35073
> NFS Server on gfsi-cant-01.core.canterbury.ac.uk        2049    Y       2920
> Self-heal Daemon on gfsi-cant-01.core.canterbury.ac.uk  N/A     Y       2927
> NFS Server on gfsi-isr-01.core.canterbury.ac.uk         2049    Y       32680
> Self-heal Daemon on gfsi-isr-01.core.canterbury.ac.uk   N/A     Y       32687
>
> Task Status of Volume wiki
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Which is what I would expect.
>
> Interestingly to check my thoughts:
>
> # tune2fs -l /srv/hod/wiki/
> tune2fs 1.42.5 (29-Jul-2012)
> tune2fs: Attempt to read block from filesystem resulted in short read
> while trying to open /srv/hod/wiki/
> Couldn't find valid filesystem superblock.
>
> That is what I expect, as it is checking a mount point, which is what it
> looks like gluster is trying to do.
>
> But:
>
> # tune2fs -l /dev/mapper/bricks-wiki
> tune2fs 1.42.5 (29-Jul-2012)
> Filesystem volume name: wiki
> Last mounted on: /srv/hod/wiki
> Filesystem UUID: a75306ac-31fa-447d-9da7-23ef66d9756b
> Filesystem magic number: 0xEF53
> Filesystem revision #: 1 (dynamic)
> Filesystem features: has_journal ext_attr resize_inode dir_index
> filetype needs_recovery extent flex_bg sparse_super large_file huge_file
> uninit_bg dir_nlink extra_isize
> Filesystem flags: signed_directory_hash
> Default mount options: user_xattr acl
> Filesystem state: clean
> <snipped>
>
> This leaves me with a couple of questions:
>
> Is there any way that I can get this sorted in the gluster configuration
> so that it actually checks the raw volume rather than the local mount point
> for that volume?
>
> Should the volume have been created using the raw path /dev/mapper/.....
> rather than the mount point?
>
> Or should I have created the volume (as I *now* see in the RedHat Storage
> Admin Guide) under a subdirectory below the mounted filestore
> (i.e. /srv/hod/wiki/brick)?
>
> If I need to move the data and recreate the bricks it is not an issue for
> me, as this is still a proof of concept for what we are doing; what I need
> to know is whether doing so will stop the continual log churn.
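>
> (I.e., recreating them along the lines of the following, with the "brick"
> subdirectory name just as an example:
>
> mkdir /srv/hod/wiki/brick
> gluster volume create gfs1 replica 2 transport tcp \
>     gfsi-rh-01:/srv/hod/wiki/brick \
>     gfsi-isr-01:/srv/hod/wiki/brick force
> )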
>
> Many thanks
>
> Paul
>
>
>
> ------------------------------
>
> Message: 8
> Date: Mon, 16 Mar 2015 17:30:04 +0100
> From: Pierre Léonard <pleonard at jouy.inra.fr>
> To: gluster-users at gluster.org
> Subject: Re: [Gluster-users] adding a node
> Message-ID: <5507050C.2060508 at jouy.inra.fr>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
> Hi,
> > Hi,
> >
> > OK, I was confused because in your original email you mentioned "a
> > multiple of the striped so 4 other node".
> >
> > So yes your stripe count is 2, so you have to add bricks 2 by 2, as the
> > error message says. If you didn't, then it would mean that you would
> > distribute your data over two 2-brick stripes, and one 1-brick single
> > volume, which I am reasonably sure wouldn't work.
> >
> > If you want to be able to add bricks one by one, you would need a
> > regular distributed volume, without:
> > - striping (requires adding as many drives as stripe count)
> > - replication (requires adding as many drives as stripe replication count)
> > - dispersion (requires adding as many drives as whatever your dispersion
> > disk count is)
> >
> > The only two things that you can extend are the number of brick groups in
> > a distributed volume, with a similar brick group, or the number of
> > replicas (but I'm sure that there are corner cases where you can't do
> > it, I haven't tried them yet).
> >
> > Is that clearer now?
> Yes.
> So I have to find another node, or maybe it is possible to create a
> second brick on the new node?
>
> Many Thank's.
>
> --
> Signature electronique
> INRA <http://www.inra.fr>
> *Pierre Léonard*
> *Senior IT Manager*
> *MetaGenoPolis*
> Pierre.Leonard at jouy.inra.fr <mailto:Pierre.Leonard at jouy.inra.fr>
> Tél. : +33 (0)1 34 65 29 78
>
> Centre de recherche INRA
> Domaine de Vilvert – Bât. 325 R+1
> 78 352 Jouy-en-Josas CEDEX
> France
> www.mgps.eu <http://www.mgps.eu>
>
>
> ------------------------------
>
> Message: 9
> Date: Mon, 16 Mar 2015 17:39:47 +0100
> From: JF Le Fillâtre <jean-francois.lefillatre at uni.lu>
> To: <gluster-users at gluster.org>
> Subject: Re: [Gluster-users] adding a node
> Message-ID: <55070753.20100 at uni.lu>
> Content-Type: text/plain; charset="utf-8"
>
>
> Hello again,
>
> Yes both are possible. What matters in that case is that you have two
> bricks to keep the stripe count of two.
>
> Note that if you split your node into two bricks and stripe over them,
> you will share the bandwidth between those bricks, and they will likely
> be of different size than the other brick stripes. From time to time
> you'll probably have to rebalance.
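>
> In that case the commands would look something like this (the second brick
> path on moody is just an example):
>
> gluster volume add-brick gvHome moody:/mnt/gvHome/brick1 moody:/mnt/gvHome/brick2
> gluster volume rebalance gvHome start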
>
> Thanks,
> JF
>
>
>
> On 16/03/15 17:30, Pierre Léonard wrote:
> > Hi,
> >> Hi,
> >>
> >> OK, I was confused because in your original email you mentioned "a
> >> multiple of the striped so 4 other node".
> >>
> >> So yes your stripe count is 2, so you have to add bricks 2 by 2, as the
> >> error message says. If you didn't, then it would mean that you would
> >> distribute your data over two 2-brick stripes, and one 1-brick single
> >> volume, which I am reasonably sure wouldn't work.
> >>
> >> If you want to be able to add bricks one by one, you would need a
> >> regular distributed volume, without:
> >> - striping (requires adding as many drives as stripe count)
> >> - replication (requires adding as many drives as stripe replication count)
> >> - dispersion (requires adding as many drives as whatever your dispersion
> >> disk count is)
> >>
> >> The only two things that you can extend are the number of brick groups in
> >> a distributed volume, with a similar brick group, or the number of
> >> replicas (but I'm sure that there are corner cases where you can't do
> >> it, I haven't tried them yet).
> >>
> >> Is that clearer now?
> > Yes.
> > So I have to find another node, or maybe it is possible to create a
> > second brick on the new node?
> >
> > Many Thank's.
> >
> > --
> > Signature electronique
> > INRA <http://www.inra.fr>
> >
> > *Pierre Léonard*
> > *Senior IT Manager*
> > *MetaGenoPolis*
> > Pierre.Leonard at jouy.inra.fr <mailto:Pierre.Leonard at jouy.inra.fr>
> > Tél. : +33 (0)1 34 65 29 78
> >
> > Centre de recherche INRA
> > Domaine de Vilvert – Bât. 325 R+1
> > 78 352 Jouy-en-Josas CEDEX
> > France
> > www.mgps.eu <http://www.mgps.eu>
> >
> >
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
> >
>
> --
>
> Jean-François Le Fillâtre
> -------------------------------
> HPC Systems Administrator
> LCSB - University of Luxembourg
> -------------------------------
> PGP KeyID 0x134657C6
>
>
> ------------------------------
>
> Message: 10
> Date: Mon, 16 Mar 2015 17:46:11 +0100
> From: Pierre Léonard <pleonard at jouy.inra.fr>
> To: gluster-users at gluster.org
> Subject: Re: [Gluster-users] adding a node
> Message-ID: <550708D3.7050606 at jouy.inra.fr>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
> Hi,
> > Hello again,
> >
> > Yes both are possible. What matters in that case is that you have two
> > bricks to keep the stripe count of two.
> >
> > Note that if you split your node into two bricks and stripe over them,
> > you will share the bandwidth between those bricks, and they will likely
> > be of different size than the other brick stripes. From time to time
> > you'll probably have to rebalance.
> >
> > Thanks,
> > JF
> I think that is the solution for today.
> I have enough disk space to create another brick, and the network is 10Gb/s,
> so I am not afraid of the throughput needed.
>
> Many thanks to you for the exchange and ideas.
>
> sincerely.
> --
> Signature electronique
> INRA <http://www.inra.fr>
> *Pierre Léonard*
> *Senior IT Manager*
> *MetaGenoPolis*
> Pierre.Leonard at jouy.inra.fr <mailto:Pierre.Leonard at jouy.inra.fr>
> Tél. : +33 (0)1 34 65 29 78
>
> Centre de recherche INRA
> Domaine de Vilvert – Bât. 325 R+1
> 78 352 Jouy-en-Josas CEDEX
> France
> www.mgps.eu <http://www.mgps.eu>
>
>
> ------------------------------
>
> Message: 11
> Date: Mon, 16 Mar 2015 18:30:21 +0100
> From: Félix de Lelelis <felix.delelisdd at gmail.com>
> To: gluster-users at gluster.org
> Subject: [Gluster-users] Gluster 3.6 issue
> Message-ID:
> <CAL4JRLmUPpRa> CHCxZKPHmQTg41fYd6B91DWgDxSJPErEgHpUg at
mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi,
>
> I have a cluster with 2 nodes, and sometimes when I run gluster volume
> status, an error appears in the log:
>
> [2015-03-16 17:24:25.215352] E
> [glusterd-utils.c:7364:glusterd_add_inode_size_to_dict] 0-management:
> xfs_info exited with non-zero exit status
> [2015-03-16 17:24:25.215379] E
> [glusterd-utils.c:7390:glusterd_add_inode_size_to_dict] 0-management:
> failed to get inode size
>
> What can that be due to?
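>
> (The log entry suggests glusterd is shelling out to xfs_info for the
> brick's filesystem; assuming a brick mounted at something like
> /bricks/brick1, the equivalent manual check would be:
>
> xfs_info /bricks/brick1
> )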
>
> Thanks
>
> ------------------------------
>
> Message: 12
> Date: Mon, 16 Mar 2015 20:44:29 +0000
> From: Jonathan Heese <jheese at inetu.net>
> To: "gluster-users at gluster.org" <gluster-users at gluster.org>
> Subject: [Gluster-users] I/O error on replicated volume
> Message-ID: <79aaa9ff350749d6a313601caf3c2d6b at int-exch6.int.inetu.net>
> Content-Type: text/plain; charset="us-ascii"
>
> Hello,
>
> So I resolved my previous issue with split-brains and the lack of
> self-healing by dropping my installed glusterfs* packages from 3.6.2 to
> 3.5.3, but now I've picked up a new issue, which actually makes normal use
> of the volume practically impossible.
>
> A little background for those not already paying close attention:
> I have a 2 node 2 brick replicating volume whose purpose in life is to
> hold iSCSI target files, primarily for use to provide datastores to a
> VMware ESXi cluster. The plan is to put a handful of image files on the
> Gluster volume, mount them locally on both Gluster nodes, and run tgtd on
> both, pointed to the image files on the mounted gluster volume. Then the
> ESXi boxes will use multipath (active/passive) iSCSI to connect to the
> nodes, with automatic failover in case of planned or unplanned downtime of
> the Gluster nodes.
>
> In my most recent round of testing with 3.5.3, I'm seeing a massive
> failure to write data to the volume after about 5-10 minutes, so I've
> simplified the scenario a bit (to minimize the variables) to: both Gluster
> nodes up, only one node (duke) mounted and running tgtd, and just regular
> (single path) iSCSI from a single ESXi server.
>
> About 5-10 minutes into migrating a VM onto the test datastore,
> /var/log/messages on duke gets blasted with a ton of messages exactly like
> this:
> Mar 15 22:24:06 duke tgtd: bs_rdwr_request(180) io error 0x1781e00 2a -1
> 512 22971904, Input/output error
>
> And /var/log/glusterfs/mnt-gluster_disk.log gets blased with a ton of
> messages exactly like this:
> [2015-03-16 02:24:07.572279] W [fuse-bridge.c:2242:fuse_writev_cbk]
> 0-glusterfs-fuse: 635299: WRITE => -1 (Input/output error)
>
> And the write operation from VMware's side fails as soon as these messages
> start.
>
> I don't see any other errors (in the log files I know of) indicating the
> root cause of these i/o errors. I'm sure that this is not enough
> information to tell what's going on, but can anyone help me figure out what
> to look at next to figure this out?
>
> I've also considered using Dan Lambright's libgfapi gluster module for
> tgtd (or something similar) to avoid going through FUSE, but I'm not sure
> whether that would be irrelevant to this problem, since I'm not 100% sure
> if it lies in FUSE or elsewhere.
>
> Thanks!
>
> Jon Heese
> Systems Engineer
> INetU Managed Hosting
> P: 610.266.7441 x 261
> F: 610.266.7434
> www.inetu.net<https://www.inetu.net/>
>
>
> ------------------------------
>
> Message: 13
> Date: Tue, 17 Mar 2015 09:07:12 +0800
> From: Punit Dambiwal <hypunit at gmail.com>
> To: Vered Volansky <vered at redhat.com>
> Cc: Martin Pavlik <mpavlik at redhat.com>, "gluster-users at gluster.org"
> <gluster-users at gluster.org>, Martin Perina <mperina at redhat.com>,
> "users at ovirt.org" <users at ovirt.org>
> Subject: Re: [Gluster-users] [ovirt-users] Zombie disks failed to
> remove..
> Message-ID:
> <CAGZcrBnRMQ2eCEEhtwrwmLK30qeJd1h+TrrgaV> 97CZ5nTrC4A at
mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi,
>
> Can anybody help with this... or is it a bug in oVirt or Gluster?
>
> On Thu, Feb 26, 2015 at 12:07 PM, Punit Dambiwal <hypunit at gmail.com>
> wrote:
>
> > Hi Vered,
> >
> > Please find the attached logs...
> >
> > Thanks,
> > Punit
> >
> > On Wed, Feb 25, 2015 at 2:16 PM, Vered Volansky <vered at redhat.com>
> > wrote:
> >
> >> Please send vdsm and engine logs so we can see why this happened.
> >>
> >> Regards,
> >> Vered
> >>
> >> ----- Original Message -----
> >> > From: "Punit Dambiwal" <hypunit at gmail.com>
> >> > To: users at ovirt.org, gluster-users at gluster.org, "Martin Pavlik"
> >> > <mpavlik at redhat.com>, "Martin Perina" <mperina at redhat.com>
> >> > Sent: Tuesday, February 24, 2015 8:50:04 PM
> >> > Subject: [ovirt-users] Zombie disks failed to remove..
> >> >
> >> > Hi,
> >> >
> >> > In my oVirt infra... oVirt (3.5.1) with glusterfs (3.6.1)... now when I
> >> > try to remove the VM, the VM is removed successfully but the VM disk
> >> > (vdisk) remains there. If I try to remove this unattached disk, it
> >> > fails to remove...
> >> >
> >> >
> >> >
> >> > Thanks,
> >> > Punit
> >> >
> >> > _______________________________________________
> >> > Users mailing list
> >> > Users at ovirt.org
> >> > http://lists.ovirt.org/mailman/listinfo/users
> >> >
> >>
> >
> >
>
> ------------------------------
>
> Message: 14
> Date: Mon, 16 Mar 2015 21:39:09 -0500
> From: Jay Strauss <me at heyjay.com>
> To: gluster-users at gluster.org
> Subject: [Gluster-users] Running a SQL database on Gluster on AWS
> Message-ID:
> <CANpdWGXynaqof+Y> g8BQaA-qS4k7Fm_V6+LmBpCf1tAZOqrNPA at
mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi,
>
> Apologies if this is well-worn ground, first-time poster. I did search the
> archives and didn't see any threads like mine.
>
> I want to set up postgresql on top of gluster. I intend to build my cluster
> on AWS. I'm running a variant of postgresql, which is a parallel version
> that lets one start postgres on multiple nodes, each of which may read the
> database files.
>
> Questions:
>
> 1) I want to run upon Ubuntu v14.04 LTS, which packages would you
> recommend?
> /pub/gluster/glusterfs/LATEST/Debian
> Wheezy or Jessie or other??
>
> 2) I am going to use AWS and EBS volumes. I watched a video by Louis
> Zuckerman, in which he indicated to use "many bricks per server".
> a) does that mean many EBS volumes per server? Or multiple bricks per EBS
> volume?
> b) How many is "many"?
> - My database will have 100s (maybe 1000s) of files, each will be between
> 10-500MB.
>
> 3) Can I use the EC2 units to do double duty, running the gluster process
> AND my postgresql processes (or is that a bad idea)?
> a) if I can do double duty, can my postgresql processes gain performance
> when they read local files? or do all process read via the network,
> regardless of whether the process wants files that may actually be local.
>
> Any other suggestions regarding running, setting up, on AWS?
>
> Thanks
> Jay
>
> ------------------------------
>
> Message: 15
> Date: Tue, 17 Mar 2015 10:05:05 +0530
> From: Ravishankar N <ravishankar at redhat.com>
> To: Jonathan Heese <jheese at inetu.net>, "gluster-users at gluster.org"
> <gluster-users at gluster.org>
> Subject: Re: [Gluster-users] I/O error on replicated volume
> Message-ID: <5507AEF9.1090703 at redhat.com>
> Content-Type: text/plain; charset="windows-1252"; Format="flowed"
>
>
> On 03/17/2015 02:14 AM, Jonathan Heese wrote:
> > Hello,
> >
> > So I resolved my previous issue with split-brains and the lack of
> > self-healing by dropping my installed glusterfs* packages from 3.6.2
> > to 3.5.3, but now I've picked up a new issue, which actually makes
> > normal use of the volume practically impossible.
> >
> > A little background for those not already paying close attention:
> > I have a 2 node 2 brick replicating volume whose purpose in life is to
> > hold iSCSI target files, primarily for use to provide datastores to a
> > VMware ESXi cluster. The plan is to put a handful of image files on
> > the Gluster volume, mount them locally on both Gluster nodes, and run
> > tgtd on both, pointed to the image files on the mounted gluster
> > volume. Then the ESXi boxes will use multipath (active/passive) iSCSI
> > to connect to the nodes, with automatic failover in case of planned or
> > unplanned downtime of the Gluster nodes.
> >
> > In my most recent round of testing with 3.5.3, I'm seeing a massive
> > failure to write data to the volume after about 5-10 minutes, so I've
> > simplified the scenario a bit (to minimize the variables) to: both
> > Gluster nodes up, only one node (duke) mounted and running tgtd, and
> > just regular (single path) iSCSI from a single ESXi server.
> >
> > About 5-10 minutes into migrating a VM onto the test datastore,
> > /var/log/messages on duke gets blasted with a ton of messages exactly
> > like this:
> >
> > Mar 15 22:24:06 duke tgtd: bs_rdwr_request(180) io error 0x1781e00 2a
> > -1 512 22971904, Input/output error
> >
> >
> > And /var/log/glusterfs/mnt-gluster_disk.log gets blased with a ton of
> > messages exactly like this:
> >
> > [2015-03-16 02:24:07.572279] W [fuse-bridge.c:2242:fuse_writev_cbk]
> > 0-glusterfs-fuse: 635299: WRITE => -1 (Input/output error)
> >
> >
>
> Are there any messages in the mount log from AFR about split-brain just
> before the above line appears?
> Does `gluster v heal <VOLNAME> info` show any files? Performing I/O on
> files that are in split-brain fails with EIO.
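>
> (I.e., something along these lines on either node, with <VOLNAME> being
> the volume behind /mnt/gluster_disk:
>
> gluster volume heal <VOLNAME> info
> gluster volume heal <VOLNAME> info split-brain
> )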
>
> -Ravi
>
> > And the write operation from VMware's side fails as soon as these
> > messages start.
> >
> >
> > I don't see any other errors (in the log files I know of) indicating
> > the root cause of these i/o errors. I'm sure that this is not enough
> > information to tell what's going on, but can anyone help me figure out
> > what to look at next to figure this out?
> >
> >
> > I've also considered using Dan Lambright's libgfapi gluster module for
> > tgtd (or something similar) to avoid going through FUSE, but I'm not
> > sure whether that would be irrelevant to this problem, since I'm not
> > 100% sure if it lies in FUSE or elsewhere.
> >
> >
> > Thanks!
> >
> >
> > /Jon Heese/
> > /Systems Engineer/
> > *INetU Managed Hosting*
> > P: 610.266.7441 x 261
> > F: 610.266.7434
> > www.inetu.net<https://www.inetu.net/>
> >
> >
> >
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
>
>
> ------------------------------
>
> Message: 16
> Date: Tue, 17 Mar 2015 12:52:31 +0800
> From: Punit Dambiwal <hypunit at gmail.com>
> To: "users at ovirt.org" <users at ovirt.org>,
> "gluster-users at gluster.org" <gluster-users at gluster.org>,
> Vered Volansky <vered at redhat.com>,
> Michal Skrivanek <michal.skrivanek at redhat.com>,
> Humble Devassy Chirammal <humble.devassy at gmail.com>,
> Kanagaraj <kmayilsa at redhat.com>,
> Kaushal M <kshlmster at gmail.com>
> Subject: [Gluster-users] VM failed to start | Bad volume specification
> Message-ID:
> <CAGZcrBnbEGCSU4S92mKC0AGVK6B9aV0_SkCroHRL0dhVK_+3AA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi,
>
> I am facing one strange issue with oVirt/GlusterFS... I still haven't
> figured out whether this issue is related to GlusterFS or oVirt...
>
> Ovirt :- 3.5.1
> Glusterfs :- 3.6.1
> Host :- 4 Hosts (Compute+ Storage)...each server has 24 bricks
> Guest VM :- more then 100
>
> Issue :- When I deployed this cluster the first time it worked well for me
> (all the guest VMs were created and running successfully)... but suddenly
> one day one of my host nodes rebooted and now none of the VMs can boot
> up... they fail with the following error: "Bad Volume Specification"
>
> VMId :- d877313c18d9783ca09b62acf5588048
>
> VDSM Logs :- http://ur1.ca/jxabi
> Engine Logs :- http://ur1.ca/jxabv
>
> ------------------------
> [root at cpu01 ~]# vdsClient -s 0 getVolumeInfo
> e732a82f-bae9-4368-8b98-dedc1c3814de 00000002-0002-0002-0002-000000000145
> 6d123509-6867-45cf-83a2-6d679b77d3c5 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
> status = OK
> domain = e732a82f-bae9-4368-8b98-dedc1c3814de
> capacity = 21474836480
> voltype = LEAF
> description =
> parent = 00000000-0000-0000-0000-000000000000
> format = RAW
> image = 6d123509-6867-45cf-83a2-6d679b77d3c5
> uuid = 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
> disktype = 2
> legality = LEGAL
> mtime = 0
> apparentsize = 21474836480
> truesize = 4562972672
> type = SPARSE
> children = []
> pool =
> ctime = 1422676305
> ---------------------
>
> I opened the same thread earlier but didn't get any definitive answer to
> solve this issue, so I am reopening it...
>
> https://www.mail-archive.com/users at ovirt.org/msg25011.html
>
> Thanks,
> Punit
>
> ------------------------------
>
> Message: 17
> Date: Tue, 17 Mar 2015 06:24:15 +0100
> From: Melkor Lord <melkor.lord at gmail.com>
> To: Joe Julian <joe at julianfamily.org>
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Working but some issues
> Message-ID:
> <CAOXaFyy=> KXAPnLNzNf5yLoaJa4A9tsd9YZ1OZCgaEZGd0YSCA at
mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Mon, Mar 16, 2015 at 5:14 AM, Joe Julian <joe at julianfamily.org> wrote:
>
> >
> > I'll just address this one point in this email because it's
such an
> > important one. This is not just an open-source project because some
> > company's developing a product and lets you have it for free, this
is an
> > open-source *project*. We, the members of the community, are as
> > responsible for that problem as the folks that know how to write in C;
> > perhaps even more so.
> >
> > I implore you to add your skills to improving the documentation. You
have
> > the ability to see the documentation from a completely different
> > perspective from the folks that wrote the code. They may not have
> > documented --remote-host because they perhaps added it for an internal
> > reason and didn't expect users to use it. By looking at it from a
> different
> > perspective, you may see a need for documentation and have the ability
to
> > implement it.
> >
>
> I globally agree with your statement, but there's a catch here, more of a
> chicken-and-egg problem actually! In order to contribute to the
> documentation or help other users, I must first be able to understand the
> project myself! Basic documentation is missing at almost every stage in
> GlusterFS, and this is problematic. If I'm not able to put GlusterFS to use
> by understanding how it works and how all of its components interact, how
> am I supposed to explain it to other people?
>
> I've been a sysadmin for over two decades and this is the first time I've
> seen such a major project (GlusterFS is not a small HelloWorld app; it's
> featureful, complex and involves learning some concepts) with so little
> documentation.
> As I said before, I currently have *no* *clue* about all the options and
> directives I can use in the main configuration file,
> /etc/glusterfs/glusterd.vol, for example! There's only a "sample"
> configuration file with no further detail than the classic "paste this into
> the file". Well, no thank you ;) I won't paste anything into a configuration
> file without understanding it first and having a complete list of the
> directives I can use. I am responsible for keeping a service running and
> can't just "paste things" as told :-)
>
> The only way I got GlusterFS to work was by searching blog posts, and this
> is bad IMHO. The way I see it, a project ecosystem should be managed like
> this: the devs should not only code but provide the full information,
> documenting every single option and directive, because nobody knows the
> project better than they do! After that, the ecosystem will grow by itself
> thanks to technical people who create blog posts and articles explaining
> various creative ways of using the software. The documentation from the
> devs does not have to be ultra exhaustive, covering all possible use cases,
> but it should at least document everything that needs to be documented so
> that other people understand what they are dealing with.
>
> Let me take 2 real world examples to get the general idea : Postfix and
> NGinX! They are flexible enough to provide a quite large set of use cases.
> Their documentation is impeccable from my point of view. They provide an
> exhaustive documentation of their inner options like this -
> http://www.postfix.org/postconf.5.html - and this -
> http://nginx.org/en/docs/dirindex.html
>
> See, even if you forget all the HowTos, articles and stuff, which are
> great additions and bonuses, you can manage to get out of the woods with
> these docs. That's exactly what I miss most in GlusterFS. Options are
> explained here and there, but often with no context.
>
> On the very specific case of the "--remote-host" option, there's a design
> problem in the "gluster" command. Launch it without arguments and you get a
> prompt, and the command completion helps a bit. Now, try "gluster -h" (or
> --help or -? or whatever) and you end up with "unrecognized option --XXX".
> This is counter-intuitive again. You can't experiment by trial and error to
> figure things out when you're in the dark; that's why I had to take a peek
> at the source code to find out that other options exist.
>
> If you spend so much time trying to find information instead of
> experimenting with a project, you may grow bored and leave. This would be
> bad because the lack of documentation may lead people to avoid a project
> which could be really useful and good! GlusterFS features exactly what I
> want for my usage, that's why I picked it up but I didn't think it
would be
> so hard to get proper documentation.
>
> For example, I can't get SSL to work with my 3.6.2 setup and
there's not a
> single bit of doc about it. There's only
> http://blog.gluster.org/author/zbyszek/ but even after following the
> necessary steps, I end up with a cryptic log entry "Cannot
authenticate
> client from fs-00-22666-2015/03/16-11:42:54:167984-Data-client-0-0-0
3.6.2"
> and repeated for all the replicas :-( I don't know what GlusterFS
expects
> in this case so I can't solve the problem for now.
>
> I'll stop boring you now, you get the point ;) You can only explain what
> you clearly understand, and for now this is still way too foggy for me :)
>
> --
> Unix _IS_ user friendly, it's just selective about who its friends are.
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
>
http://www.gluster.org/pipermail/gluster-users/attachments/20150317/b357a41e/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 18
> Date: Mon, 16 Mar 2015 23:43:12 -0700
> From: Joe Julian <joe at julianfamily.org>
> To: Melkor Lord <melkor.lord at gmail.com>
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Working but some issues
> Message-ID: <5507CD00.1000503 at julianfamily.org>
> Content-Type: text/plain; charset="utf-8";
Format="flowed"
>
> On 03/16/2015 10:24 PM, Melkor Lord wrote:
> >
> > On Mon, Mar 16, 2015 at 5:14 AM, Joe Julian <joe at
julianfamily.org
> > <mailto:joe at julianfamily.org>> wrote:
> >
> >
> > I'll just address this one point in this email because
it's such
> > an important one. This is not just an open-source project because
> > some company's developing a product and lets you have it for
free,
> > this is an open-source *project*. We, the members of the
> > community, are as responsible for that problem as the folks that
> > know how to write in C; perhaps even more so.
> >
> > I implore you to add your skills to improving the documentation.
> > You have the ability to see the documentation from a completely
> > different perspective from the folks that wrote the code. They may
> > not have documented --remote-host because they perhaps added it
> > for an internal reason and didn't expect users to use it. By
> > looking at it from a different perspective, you may see a need for
> > documentation and have the ability to implement it.
> >
> >
> > I globally agree with your statement, but there's a catch here, more of
> > a chicken-and-egg problem actually! In order to contribute to the
> > documentation or help other users, I must first be able to understand
> > the project myself! Basic documentation is missing at almost every
> > stage in GlusterFS, and this is problematic. If I'm not able to put
> > GlusterFS to use by understanding how it works and how all of its
> > components interact, how am I supposed to explain it to other people?
>
> Good question. It took me months to figure all that out (with far less
> documentation than there is now) and even with pictures and arrows and
> 8x10 glossies with a paragraph on the back of each one, people still
> have a hard time getting it. That's why so much effort has gone into
> making a CLI that does it all for you and doesn't require you to be a
> storage engineer to use it.
>
> >
> > I've been a sysadmin for over two decades and this is the first time
> > I've seen such a major project (GlusterFS is not a small HelloWorld
> > app; it's featureful, complex and involves learning some concepts)
> > with so little documentation.
>
> I'll split this out as I think you're unaware of the admin guide
that's
> pretty detailed and is, at least, published with the source code (it may
> be on the gluster.org site somewhere, but I'm too tired right now to
> look). The source can readily be found on github
> <
>
https://github.com/GlusterFS/glusterfs/tree/master/doc/admin-guide/en-US/markdown
> >.
>
> > As I said before, I currently have *no* *clue* of all the options and
> > directives I can use in the main configuration file
> > /etc/glusterfs/glusterd.vol for example!
>
> Right. There's nearly no need to mess with that, except in the rare case
> that an unprivileged user needs access to the management port.
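>
> For reference, a minimal glusterd.vol looks roughly like the sketch below,
> based on the sample file shipped with the sources (the exact defaults in
> your distribution's copy may differ); the rpc-auth-allow-insecure line is
> the one you would add for that unprivileged-access case:
>
> volume management
>     type mgmt/glusterd
>     option working-directory /var/lib/glusterd
>     option transport-type socket,rdma
>     # allow clients connecting from non-privileged ports
>     # (e.g. libgfapi running as a normal user)
>     option rpc-auth-allow-insecure on
> end-volume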
>
> > There's only a "sample" configuration file with no further detail than
> > the classic "paste this into the file". Well, no thank you ;) I won't
> > paste anything into a configuration file without understanding it
> > first and having a complete list of the directives I can use. I am
> > responsible for keeping a service running and can't just "paste
> > things" as told :-)
> >
> > The only way I got GlusterFS to work was by searching blog posts, and
> > this is bad IMHO. The way I see it, a project ecosystem should be
> > managed like this: the devs should not only code but provide the full
> > information, documenting every single option and directive, because
> > nobody knows the project better than they do! After that, the
> > ecosystem will grow by itself thanks to technical people that create
> > blog posts and articles to explain various creative ways of using the
> > software. The documentation from the devs does not have to be ultra
> > exhaustive explaining all possible use cases of course but at least,
> > document everything that needs to be documented to let other people
> > understand what they are dealing with.
>
> That is the way it's done, btw. The developers are required to document
> their features before a release.
>
> > Let me take 2 real world examples to get the general idea : Postfix
> > and NGinX! They are flexible enough to provide a quite large set of
> > use cases. Their documentation is impeccable from my point of view.
> > They provide an exhaustive documentation of their inner options like
> > this - http://www.postfix.org/postconf.5.html - and this -
> > http://nginx.org/en/docs/dirindex.html
>
> Postfix is 16 years old, and it hasn't always had very detailed
> documentation. Do you also remember when it was worse than sendmail's?
>
> I could counter with any of the myriad examples of horrible documentation
> for some of the most popular software systems out there, only to point out
> that Gluster's isn't all that bad by comparison to a great many of its
> peers.
>
> >
> > See, even if you forget all the HowTos, articles and stuff, which are
> > great additions and bonuses, you can manage to get out of the woods
> > with these docs. That's exactly what I miss most in GlusterFS. Options
> > are explained here and there, but often with no context.
> >
> > On the very specific case of the "--remote-host" option, there's a
> > design problem in the "gluster" command. Launch it without arguments
> > and you get a prompt, and the command completion helps a bit. Now, try
> > "gluster -h" (or --help or -? or whatever) and you end up with
> > "unrecognized option --XXX". This is counter-intuitive again. You
> > can't experiment by trial and error to figure things out when you're
> > in the dark; that's why I had to take a peek at the source code to
> > find out that other options exist.
>
> "gluster help" is pretty intuitive, imho, as is
>
> # gluster
> gluster> help
>
> and, more detailed than any other software I can think of, "gluster
> volume set help", which lists all the settings you can tweak on your
> volume along with their descriptions - the equivalent of that Postfix
> document.
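>
> For instance (the volume name "myvol" below is just a placeholder),
> something along these lines lists the tunables and then changes one:
>
> # list every settable option with its default value and description
> gluster volume set help
>
> # tweak one option on a volume, then inspect the result
> gluster volume set myvol performance.cache-size 256MB
> gluster volume info myvol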
>
> > If you spend so much time trying to find information instead of
> > experimenting with a project, you may grow bored and leave.
>
> Agreed, and that's something that the gluster.org web site's been
> failing at since the last 1 or 2 web site revamps.
>
> > This would be bad because the lack of documentation may lead people to
> > avoid a project which could be really useful and good! GlusterFS
> > features exactly what I want for my usage, that's why I picked it
up
> > but I didn't think it would be so hard to get proper
documentation.
> >
> > For example, I can't get SSL to work with my 3.6.2 setup and
there's
> > not a single bit of doc about it. There's only
> > http://blog.gluster.org/author/zbyszek/ but even after following the
> > necessary steps, I end up with a cryptic log entry "Cannot
> > authenticate client from
> > fs-00-22666-2015/03/16-11:42:54:167984-Data-client-0-0-0 3.6.2"
and
> > repeated for all the replicas :-( I don't know what GlusterFS
expects
> > in this case so I can't solve the problem for now.
>
>
>
https://github.com/GlusterFS/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_ssl.md
> <-- not a blog
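>
> The short version, as I recall it from that document (the option names come
> from there; the volume name "Data" and the CN list below are only examples):
> put the PEM-encoded certificate, key and CA bundle on every server and
> client as /etc/ssl/glusterfs.pem, /etc/ssl/glusterfs.key and
> /etc/ssl/glusterfs.ca, then:
>
> gluster volume set Data client.ssl on
> gluster volume set Data server.ssl on
> # restrict access to certificates whose CN matches one of these names
> gluster volume set Data auth.ssl-allow 'fs-00,fs-01,fs-02'
>
> One thing worth checking is whether the CN of each client's certificate is
> actually covered by auth.ssl-allow.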
>
> >
> > I'll stop boring you now, you get the point ;) You can only explain
> > what you clearly understand, and for now this is still way too foggy
> > for me :)
> >
>
> Useful and eloquent perspectives, and things that infra is looking at
> rectifying. The web site is covered in too many words. The "Getting
> Started" entry has 13 sub-entries. That's not "getting started", that's
> "tl;dr". A new vision is being put together that will try not just to
> build a fancy web thingy, but will define goals such as usability,
> engagement and community interfacing, that kind of stuff - and measure the
> effectiveness of the changes that are made. It'll be change for the sake
> of improvement rather than just change for the sake of change.
>
> > --
> > Unix _IS_ user friendly, it's just selective about who its friends
are.
>
> But I still say you should document things you find in the source
> if they aren't documented - since you're in there anyway. :-D
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
>
http://www.gluster.org/pipermail/gluster-users/attachments/20150316/eb2aa98f/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 19
> Date: Tue, 17 Mar 2015 16:09:47 +0800
> From: Punit Dambiwal <hypunit at gmail.com>
> To: "gluster-users at gluster.org" <gluster-users at
gluster.org>, Kaushal M
> <kshlmster at gmail.com>, Kanagaraj <kmayilsa at
redhat.com>, Vijay
> Bellur
> <vbellur at redhat.com>
> Subject: [Gluster-users] Bad volume specification | Connection timeout
> Message-ID:
> <CAGZcrB=n=D0VUuPXLUONOKbJptMjv7=M8QuOJQQNVSTuG248g at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi,
>
> I am facing a strange issue with oVirt/GlusterFS, and I still haven't been
> able to determine whether it is caused by GlusterFS or by oVirt.
>
> oVirt :- 3.5.1
> GlusterFS :- 3.6.1
> Hosts :- 4 hosts (compute + storage), each with 24 bricks
> Guest VMs :- more than 100
>
> Issue :- When I first deployed this cluster it worked well (all the guest
> VMs were created and ran successfully), but one day one of the host nodes
> rebooted, and now none of the VMs can boot; they all fail with the error
> "Bad Volume Specification".
>
> VMId :- d877313c18d9783ca09b62acf5588048
>
> VDSM Logs :- http://ur1.ca/jxabi
> Engine Logs :- http://ur1.ca/jxabv
>
> ------------------------
> [root at cpu01 ~]# vdsClient -s 0 getVolumeInfo
> e732a82f-bae9-4368-8b98-dedc1c3814de 00000002-0002-0002-0002-000000000145
> 6d123509-6867-45cf-83a2-6d679b77d3c5 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
> status = OK
> domain = e732a82f-bae9-4368-8b98-dedc1c3814de
> capacity = 21474836480
> voltype = LEAF
> description =
> parent = 00000000-0000-0000-0000-000000000000
> format = RAW
> image = 6d123509-6867-45cf-83a2-6d679b77d3c5
> uuid = 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
> disktype = 2
> legality = LEGAL
> mtime = 0
> apparentsize = 21474836480
> truesize = 4562972672
> type = SPARSE
> children = []
> pool =
> ctime = 1422676305
> ---------------------
>
> I opened the same thread earlier but didn't get a definitive answer, so I
> am reopening it:
>
> https://www.mail-archive.com/users at ovirt.org/msg25011.html
>
> Thanks,
> Punit
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
>
http://www.gluster.org/pipermail/gluster-users/attachments/20150317/eb027eaf/attachment-0001.html
> >
>
> ------------------------------
>
> Message: 20
> Date: Tue, 17 Mar 2015 10:20:12 +0100
> From: Pierre Léonard <pleonard at jouy.inra.fr>
> To: aytac zeren <aytaczeren at gmail.com>,
>     "gluster-users at gluster.org" <gluster-users at gluster.org>
> Subject: Re: [Gluster-users] adding a node
> Message-ID: <5507F1CC.5080209 at jouy.inra.fr>
> Content-Type: text/plain; charset="utf-8";
Format="flowed"
>
> Hi aytac zeren,
> > In order to have a healthy volume, you need to add bricks in multiples
> > of 4. Which means that if you have 4 bricks and want to extend your
> > setup with additional bricks, then you will need 4 more bricks in your
> > cluster.
> >
> > I don't recommend hosting more than one brick on a host, as it would
> > cause data loss on failure of the node if your primary and redundant
> > copies are stored on the same host.
> > Regards
> > Aytac
> Yes, I understand. But you can understand, and I hope the Gluster team can
> understand, that if I want to expand my computational power and storage I
> can't buy four nodes at once; it's too expensive.
> Another example: I have a big cluster with 14 nodes in stripe 7. I can't
> buy 14 other nodes to expand it in one go. It's a big limitation on the
> flexibility of GlusterFS.
>
> Does that mean that GlusterFS is only suited to small clusters?
>
> Sincerely.
> --
> Signature electronique
> INRA <http://www.inra.fr>
> *Pierre Léonard*
> *Senior IT Manager*
> *MetaGenoPolis*
> Pierre.Leonard at jouy.inra.fr <mailto:Pierre.Leonard at jouy.inra.fr>
> Tél. : +33 (0)1 34 65 29 78
>
> Centre de recherche INRA
> Domaine de Vilvert - Bât. 325 R+1
> 78 352 Jouy-en-Josas CEDEX
> France
> www.mgps.eu <http://www.mgps.eu>
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
>
http://www.gluster.org/pipermail/gluster-users/attachments/20150317/9bb6d373/attachment-0001.html
> >
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: logo-INRA2013.gif
> Type: image/gif
> Size: 2560 bytes
> Desc: http://www.inra.fr/
> URL: <
>
http://www.gluster.org/pipermail/gluster-users/attachments/20150317/9bb6d373/attachment-0001.gif
> >
>
> ------------------------------
>
> Message: 21
> Date: Tue, 17 Mar 2015 10:38:04 +0100
> From: Niels de Vos <ndevos at redhat.com>
> To: gluster-users at gluster.org, gluster-devel at gluster.org
> Subject: [Gluster-users] REMINDER: Gluster Community Bug Triage
> meeting today at 12:00 UTC
> Message-ID: <20150317093804.GJ29220 at
ndevos-x240.usersys.redhat.com>
> Content-Type: text/plain; charset="us-ascii"
>
> Hi all,
>
> This meeting is scheduled for anyone that is interested in learning more
> about, or assisting with the Bug Triage.
>
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
> ( https://webchat.freenode.net/?channels=gluster-meeting )
> - date: every Tuesday
> - time: 12:00 UTC, 13:00 CET
> (in your terminal, run: date -d "12:00 UTC")
> - agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
>
> Currently the following items are listed:
> * Roll Call
> * Status of last week's action items
> * Group Triage
> * Open Floor
>
> The last two topics have space for additions. If you have a suitable bug
> or topic to discuss, please add it to the agenda.
>
> Thanks,
> Niels
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: not available
> Type: application/pgp-signature
> Size: 181 bytes
> Desc: not available
> URL: <
>
http://www.gluster.org/pipermail/gluster-users/attachments/20150317/df268b1d/attachment-0001.sig
> >
>
> ------------------------------
>
> Message: 22
> Date: Tue, 17 Mar 2015 11:13:12 +0100
> From: JF Le Fillâtre <jean-francois.lefillatre at uni.lu>
> To: <gluster-users at gluster.org>
> Subject: Re: [Gluster-users] adding a node
> Message-ID: <5507FE38.30507 at uni.lu>
> Content-Type: text/plain; charset="utf-8"
>
>
> Hello Pierre,
>
> I see your point and I understand your arguments. But as it happens, the
> way you create your volumes has an impact on what you can do afterward,
> and how you can expand them. *It has nothing to do with GlusterFS
> itself, nor the developers or the community, it's all in your
> architectural choices in the beginning.*
>
> It would be the same problem with many other distributed FS, or even
> some RAID setups. That is the kind of thing that you have to think about
> in the very beginning, and that will have consequences later.
>
> Aytac's argument doesn't apply to you as you don't have a
replicated
> volume, but a distributed+striped one. So if you lose one of your
> original 4 nodes, you lose half of one of your stripes, and that will
> make the files on that stripe inaccessible anyway. If you lose a node
> with both bricks in the stripe, well the result will be the same. So I
> don't think that the colocation of striped bricks on the same host
> matters much in your case, outside of performance concerns.
>
> As for your 14-node cluster, if it's again a distributed+striped volume,
> then it's a 2 × 7 volume (distributed over 2 brick groups, each being a
> 7-disk stripe). To extend it, you will have to add 7 more bricks to
> create a new striped brick group, and your volume will be transformed
> into a 3 × 7 one.
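>
> A rough sketch of what that extension would look like (the hostnames and
> brick paths below are only placeholders for your own):
>
> # add one new 7-brick striped group to the distributed-striped volume
> gluster volume add-brick myvol \
>     node15:/export/brick node16:/export/brick node17:/export/brick \
>     node18:/export/brick node19:/export/brick node20:/export/brick \
>     node21:/export/brick
>
> # then spread existing data across the new group
> gluster volume rebalance myvol start
> gluster volume rebalance myvol status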
>
> I would advise you to read carefully the master documentation pertaining
> to volume creation and architecture. Maybe it will help you understand
> better the way things work and the impacts of your choices:
>
>
>
https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
>
> Thanks,
> JF
>
>
> On 17/03/15 10:20, Pierre Léonard wrote:
> > Hi aytac zeren,
> >> In order to have a healthy volume, you need to add bricks in multiples
> >> of 4. Which means that if you have 4 bricks and want to extend your
> >> setup with additional bricks, then you will need 4 more bricks in your
> >> cluster.
> >>
> >> I don't recommend hosting more than one brick on a host, as it would
> >> cause data loss on failure of the node if your primary and redundant
> >> copies are stored on the same host.
> >> Regards
> >> Aytac
> > Yes, I understand. But you can understand, and I hope the Gluster team
> > can understand, that if I want to expand my computational power and
> > storage I can't buy four nodes at once; it's too expensive.
> > Another example: I have a big cluster with 14 nodes in stripe 7. I can't
> > buy 14 other nodes to expand it in one go. It's a big limitation on the
> > flexibility of GlusterFS.
> >
> > Does that mean that GlusterFS is only suited to small clusters?
> >
> > Sincerely.
> > --
> > Signature electronique
> > INRA <http://www.inra.fr>
> >
> > *Pierre Léonard*
> > *Senior IT Manager*
> > *MetaGenoPolis*
> > Pierre.Leonard at jouy.inra.fr <mailto:Pierre.Leonard at jouy.inra.fr>
> > Tél. : +33 (0)1 34 65 29 78
> >
> > Centre de recherche INRA
> > Domaine de Vilvert - Bât. 325 R+1
> > 78 352 Jouy-en-Josas CEDEX
> > France
> > www.mgps.eu <http://www.mgps.eu>
> >
> >
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
> >
>
> --
>
> Jean-François Le Fillâtre
> -------------------------------
> HPC Systems Administrator
> LCSB - University of Luxembourg
> -------------------------------
> PGP KeyID 0x134657C6
>
>
> ------------------------------
>
> Message: 23
> Date: Tue, 17 Mar 2015 11:48:35 +0100
> From: Pierre Léonard <pleonard at jouy.inra.fr>
> To: gluster-users at gluster.org
> Subject: Re: [Gluster-users] adding a node
> Message-ID: <55080683.50300 at jouy.inra.fr>
> Content-Type: text/plain; charset="utf-8";
Format="flowed"
>
> Hi JF,
> > Hello Pierre,
> >
> > I see your point and I understand your arguments. But as it happens,
the
> > way you create your volumes has an impact on what you can do
afterward,
> > and how you can expand them. *It has nothing to do with GlusterFS
> > itself, nor the developers or the community, it's all in your
> > architectural choices in the beginning.*
> Yes, in theory you are right. Here at INRA we are in a specific unit that
> is constantly growing, but the budget is allocated annually.
> So I have built three clusters dedicated to different types of
> computation, and when the new platform needs storage I have to either
> build a new cluster or buy a complete set of four or seven nodes,
> depending on my budget.
> Our unit does not operate like a data centre for computation; the
> evolution is a continuous process.
>
>
> > It would be the same problem with many other distributed FS, or even
> > some RAID setups. That is the kind of thing that you have to think
about
> > in the very beginning, and that will have consequences later.
> Yes, that's the reason we have a scratch zone dedicated to computation and
> intermediate files; it is not secured, and all the users know it's like
> the hard drive in their PC. The other volumes are secured with RAID 5 and
> replicas.
> I think these choices are good for three years, the time it will take for
> us to hit the first disk problems.
>
> > As for your 14-node cluster, if it's again a distributed+striped volume,
> > then it's a 2 × 7 volume (distributed over 2 brick groups, each being a
> > 7-disk stripe). To extend it, you will have to add 7 more bricks to
> > create a new striped brick group, and your volume will be transformed
> > into a 3 × 7 one.
> OK that's better.
> > I would advise you to read carefully the master documentation
pertaining
> > to volume creation and architecture. Maybe it will help you understand
> > better the way things work and the impacts of your choices:
> >
> >
>
https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
> I will.
>
> Many thanks to you and all.
>
> Sincerely.
>
> --
> Signature electronique
> INRA <http://www.inra.fr>
> *Pierre Léonard*
> *Senior IT Manager*
> *MetaGenoPolis*
> Pierre.Leonard at jouy.inra.fr <mailto:Pierre.Leonard at jouy.inra.fr>
> Tél. : +33 (0)1 34 65 29 78
>
> Centre de recherche INRA
> Domaine de Vilvert - Bât. 325 R+1
> 78 352 Jouy-en-Josas CEDEX
> France
> www.mgps.eu <http://www.mgps.eu>
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
>
http://www.gluster.org/pipermail/gluster-users/attachments/20150317/c29f4366/attachment-0001.html
> >
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: logo-INRA2013.gif
> Type: image/gif
> Size: 2560 bytes
> Desc: http://www.inra.fr/
> URL: <
>
http://www.gluster.org/pipermail/gluster-users/attachments/20150317/c29f4366/attachment-0001.gif
> >
>
> ------------------------------
>
> Message: 24
> Date: Tue, 17 Mar 2015 11:03:59 +0100
> From: Nico Schottelius <nico-gluster-users at schottelius.org>
> To: Joe Julian <joe at julianfamily.org>
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Working but some issues
> Message-ID: <20150317100359.GA1028 at schottelius.org>
> Content-Type: text/plain; charset=us-ascii
>
> Joe Julian [Mon, Mar 16, 2015 at 11:43:12PM -0700]:
> > Good question. It took me months to figure all that out (with far less
> > documentation than there is now) [...]
>
> ... just wondering: Why don't we run a kickstarter/indiegogo campaign
to
> finance people to write (from scratch?) documentation?
>
> There are various examples of GREAT documentation for other projects, and
> given that GlusterFS is evolving software that is becoming more popular,
> it could be a way to improve the documentation and thus the experience
> with GlusterFS.
>
> --
> New PGP key: 659B 0D91 E86E 7E24 FD15 69D0 C729 21A1 293F 2D24
>
>
> ------------------------------
>
> Message: 25
> Date: Tue, 17 Mar 2015 07:38:44 -0400
> From: Kaleb KEITHLEY <kkeithle at redhat.com>
> To: "gluster-users at gluster.org" <gluster-users at
gluster.org>, Gluster
> Devel <gluster-devel at gluster.org>
> Subject: [Gluster-users] GlusterFS 3.4.7beta2 is now available for
> testing
> Message-ID: <55081244.4080709 at redhat.com>
> Content-Type: text/plain; charset=windows-1252; format=flowed
>
>
> Many thanks to all our users that have reported bugs against the 3.4
> version of GlusterFS! glusterfs-3.4.7beta2 has been made available for
> testing.
>
> N.B. glusterfs-3.4.7beta1 was released but did not build with gcc-5.
>
> If you filed a bug against 3.4.x and it is listed as fixed in the
> Release Notes, please test it to confirm that it is fixed. Please update
> the bug report as soon as possible if you find that it has not been
> fixed. If any assistance is needed, do not hesitate to send a request to
> the Gluster Users mailing list (gluster-users at gluster.org) or start a
> discussion in the #gluster channel on Freenode IRC.
>
> The release notes can be found at
>
>
http://blog.gluster.org/2015/03/glusterfs-3-4-7beta2-is-now-available-for-testing/
>
> Packages for selected distributions can be found on the main download
> server at
> http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.7beta2/
>
> Thank you in advance for testing,
>
> --
>
> your friendly GlusterFS-3.4 release wrangler
>
>
>
>
>
>
>
>
>
> ------------------------------
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
> End of Gluster-users Digest, Vol 83, Issue 18
> *********************************************
>
--
ramos ye
yt.tienon at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
<http://www.gluster.org/pipermail/gluster-users/attachments/20150321/4e49026a/attachment.html>