Frederik Banke
2018-Apr-18  20:32 UTC
[Gluster-users] Replicated volume read requests are served by remote brick
I have created a 2 brick replicated volume.
gluster> volume status
Status of volume: storage
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick master:/glusterfs/bricks/storage/mountpoint
                                            49153     0          Y       5301
Brick worker1:/glusterfs/bricks/storage/mountpoint
                                            49153     0          Y       3002
The volume is mounted like this:
On worker1 node /etc/fstab
worker1:/storage      /data/storage/       glusterfs     defaults,_netdev  0  0
On master node /etc/fstab
master:/storage      /data/storage/       glusterfs     defaults,_netdev  0  0
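(Aside: the read preference of a replicated volume can be inspected and changed with volume options. A sketch, assuming a GlusterFS release where the AFR options cluster.choose-local and cluster.read-hash-mode are available -- option names and defaults vary between releases, so check yours:)

```shell
# Inspect how AFR currently picks the read subvolume for this volume.
# (Output values differ per release; these commands only query/set options.)
gluster volume get storage cluster.choose-local
gluster volume get storage cluster.read-hash-mode

# Ask AFR to prefer the brick local to the client, when one exists.
gluster volume set storage cluster.choose-local on
```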
When I put read load (many small files) on the volume mounted on the master
node, CPU usage looks like this:
On master node: glusterfs ~ 50%
On master node: glusterfsd ~ 25%
On worker1 node: glusterfsd ~ 50%
There is no other load on the servers than the read load I start.
When I inspect the glusterfsd process on worker1 with strace, it appears to
serve at least some of the file reads from that node.
Is this expected behavior? I would have expected that, since this is a
replicated volume under pure read load, all requests would be served from the
brick on localhost rather than going over the network.
Can anyone help to clarify my understanding of the architecture?
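(For context on the behavior described above: by default AFR typically picks the read subvolume per file, e.g. by hashing, rather than by locality, so reads spread across all replicas. The following is a simplified, hypothetical illustration in Python -- not Gluster's actual code -- of why roughly half the reads in a 2-way replica would land on the remote brick:)

```python
# Hypothetical sketch of hash-based read-subvolume selection (this is an
# illustration of the idea, not Gluster's real algorithm): each file's
# identifier is hashed to choose which replica serves its reads, so the
# load spreads across bricks regardless of where the client runs.
import hashlib

def pick_read_brick(gfid: str, brick_count: int) -> int:
    # Hash the file identifier and map it onto one of the replicas.
    digest = hashlib.md5(gfid.encode()).digest()
    return digest[0] % brick_count

bricks = ["master", "worker1"]
files = [f"gfid-{i:04d}" for i in range(1000)]
counts = {b: 0 for b in bricks}
for f in files:
    counts[bricks[pick_read_brick(f, len(bricks))]] += 1

# With 1000 files the split is close to even, i.e. about half the
# reads go to the remote brick.
print(counts)
```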
Vlad Kopylov
2018-Apr-19  03:11 UTC
[Gluster-users] Replicated volume read requests are served by remote brick
I was trying to use
http://lists.gluster.org/pipermail/gluster-users/2015-June/022322.html as an
example, and it never worked. Neither did

    gluster volume set <VOLNAME> cluster.nufa enable

nor the volume options

    cluster.choose-local: on
    cluster.nufa: on

It still reads data from the network bricks. I was thinking of somehow
blocking inter-server network access for reads, but that seemed too suicidal.
Suggestions welcome.

On Wed, Apr 18, 2018 at 4:32 PM, Frederik Banke <info at patch.dk> wrote:
> Is this expected behavior? I would think that since it is a replicated
> volume and read load, it would serve all the requests from the brick on the
> localhost and not use the network to serve the requests.
> [...]