Hi, and thanks.
I'm afraid I didn't explain my problem particularly well; sorry about that.
I assume a typical gluster user would add the required replica bricks
once, at setup time, and then, whenever the servers restart, let gluster
come back up using the same configuration files as before the restart.
Restarts would happen again and again, without any manual changes to the
storage configuration. Every time you add a brick, the new brick has to
be synced before it is a true replica of the other bricks, and that
takes some time.
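For reference, the usual way of growing into a replica (the path I want to avoid at every restart) would look roughly like this. This is only a sketch; 'myvol' and the brick paths come from my example, and 'server2' is a placeholder for the second node:

```shell
# Turn an existing single-brick volume into a 2-way replica.
# 'myvol', 'server2' and the brick path are placeholders.
gluster volume add-brick myvol replica 2 server2:/path/to/brick

# The new brick starts out empty, so self-heal has to copy
# everything over - this is the sync penalty I mentioned.
gluster volume heal myvol full
gluster volume heal myvol info
```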
My case is more complicated. When setting up this replicated file
system, I would do the same. However, at an early stage of every server
restart I need to make the local brick available to my clients on all
servers, i.e. before the servers get in contact with each other.
At this stage, I would like each server to have a volume which is not
replicated. After a while, these single-brick volumes would be dropped
and replaced by the replicated volume I used before the restart.
All the stages (and volumes) I describe would use the same physical
storage (bricks). As long as nothing writes to the single-brick
volumes, I shouldn't risk ending up in a split-brain situation when I
restore the replicated volume.
Restoring the volume would be done by keeping a backup of the
configuration files which define the replicated volume. Restarting the
gluster daemons with these configuration files would restore the
replicated volume without having to add any bricks. The reason I don't
want to add the brick at each restart is the penalty of syncing the new
brick, even though that brick already contains all the necessary files
from the time before the server restart.
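The backup-and-restore step I have in mind is roughly the following. This assumes the daemon state lives in /var/lib/glusterd (the default location) and that systemd manages glusterd; both are assumptions about our setup, and the backup path is just illustrative:

```shell
# Before the restart: save the replicated volume's definition
# while the daemon is down, so the files are consistent.
systemctl stop glusterd
tar czf /safe/place/glusterd-backup.tar.gz /var/lib/glusterd

# After the single-brick interlude: put the old definition back
# and restart the daemon, so the replicated volume reappears
# without any add-brick (and thus without a resync).
systemctl stop glusterd
tar xzf /safe/place/glusterd-backup.tar.gz -C /
systemctl start glusterd
```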
Sorry for this complex and strange way of using glusterfs, but my task
is to find a way to use it given a set of fixed use cases that we have.
I know this is not easy, but I would at least like to find some way to
implement it (good or bad). I think it's possible, but it may be too
inefficient to be a good solution.
So, back to my original question: would 'gluster volume create
myvol local_IP:/path/to/brick' affect the data or metadata (extended
attributes) on the brick in any way? I think that is the most critical
command in the chain, as I don't expect starting or mounting the volume
to affect anything. I want to avoid touching the (meta-)data, because I
intend to restore the replicated volume I used before the restart
without gluster noticing that the brick was used in another
configuration in between.
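To check this for myself, I plan to snapshot the brick root's extended attributes before and after the create command and compare them. A sketch, with the same placeholder volume name and brick path as above:

```shell
# Dump all extended attributes of the brick root (hex-encoded)
# before and after 'gluster volume create', then compare.
getfattr -d -m . -e hex /path/to/brick > xattrs.before
gluster volume create myvol local_IP:/path/to/brick
getfattr -d -m . -e hex /path/to/brick > xattrs.after
diff xattrs.before xattrs.after
```

Any difference in the diff would mean the create command changed the brick's metadata.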
Regards
Andreas
On 04/11/2015 09:18 AM, Atin Mukherjee wrote:
> On 04/11/2015 01:21 AM, Andreas Hollaus wrote:
>> Hi,
>>
>> I wonder what happens when the command 'gluster volume create...' is
>> executed? How is the file system on the brick affected by that command,
>> data and meta-data (extended attributes)?
>>
>> The reason for this question is that I have a strange use case where my
>> 2 (mirrored) servers are restarted. At an early stage of the reboot
>> phase, I have to create a new gluster file system on each server so that
>> the same directory can be used for read access. Later on, I would like
>> to delete these single server volumes and replace them with the mirrored
>> gluster volume I used before the restart.
>>
>> I guess I can restore the previous volume definition from a backup of
>> the gluster configuration files, but I'm worried that the 'gluster
>> volume create...' command might have affected the brick so that it is in
>> a different state compared to before the restart, when the restored
>> gluster configuration was valid. I realize that for this to work, the
>> extended attributes can not change while the mirrored volume is stopped.
>>
>> Any idea if I can use glusterfs like this or am I violating some rules?
> If I've understood your requirement correctly, you are trying to convert
> a distributed volume into a replicated one. In that case you would need
> to convert it through add-brick, mentioning the replica count. I don't
> think you would face any issues restoring configurations if all the
> steps are performed correctly.
>
> ~Atin
>>
>> Regards
>> Andreas
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>